Profile: jOc

Your personal background.
BOINC - enhancing research workloads for the benefit of mankind - Computer Optimisation - CPU, GPU & RAM - PC, Mac & ARM development

HPC - High Performance Computing for beneficial goals of obvious worth.

(Guide, experimentation, developer kits and manuals)


Observing the workloads of many beneficial projects, we find that the data set is commonly small.
In addition to the working set being smaller or larger than a machine can compute optimally, we find that feature sets such as FMA and AVX have commonly not been implemented.

Some projects, like Asteroids@home and SETI@home, are using enhanced instruction sets such as AVX, and memory loads that benefit from the 4 GB or more of RAM available on decent gaming rigs and home laptops.
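As a minimal sketch of what implementing those feature sets involves, the C snippet below probes the host CPU at run time with the GCC/Clang __builtin_cpu_supports builtin, so an app can pick an SSE/AVX/FMA code path only on machines that actually have it. It is illustrative only, not code from any of the projects named above.

/* Run-time SIMD feature probe (GCC/Clang builtins); illustrative only. */
#include <stdio.h>

int main(void)
{
    printf("SSE2 : %s\n", __builtin_cpu_supports("sse2") ? "yes" : "no");
    printf("AVX  : %s\n", __builtin_cpu_supports("avx")  ? "yes" : "no");
    printf("AVX2 : %s\n", __builtin_cpu_supports("avx2") ? "yes" : "no");
    printf("FMA  : %s\n", __builtin_cpu_supports("fma")  ? "yes" : "no");
    return 0;
}

An app would branch on these results to a vectorised kernel, or fall back to a plain scalar loop.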

Not all modern machines have loads of RAM; however, research and university establishments use sufficiently powerful machines that can glow on the BOINC record in full glory with a 256 MB to 768 MB workload.

In addition, the machines are commonly virtualised (Xen, for example), and servers may have SPARC- or PowerPC-specific hardware and instruction sets.

To examine examples: below we can see that workloads include small data arrays, in the 40 MB to 79 MB range.

In line with servers and gaming rigs, we have 1 GB of RAM per core; of course, not all problems require a larger array in the workload, and some machines have only 256 MB per core!
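A minimal sketch of sizing the working set to the host, assuming a Linux/glibc machine where sysconf reports physical pages and online cores; the per-core figure is what you would compare against the 256 MB to 1 GB range above.

/* Per-core memory budget sketch; assumes Linux/glibc sysconf extensions. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long pages     = sysconf(_SC_PHYS_PAGES);       /* physical memory pages */
    long page_size = sysconf(_SC_PAGE_SIZE);        /* bytes per page        */
    long cores     = sysconf(_SC_NPROCESSORS_ONLN); /* online logical cores  */

    long long total_mb    = (long long)pages * page_size / (1024 * 1024);
    long long mb_per_core = cores > 0 ? total_mb / cores : total_mb;

    printf("RAM: %lld MB over %ld cores -> %lld MB per core\n",
           total_mb, cores, mb_per_core);
    return 0;
}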

However much RAM you allocate to the projected workload, small memory loads can and will be sufficient for data swapping and/or paging (like DNA replicators)...

Some tasks can benefit substantially from larger thread and data models; to my mind, DNA and mapping data are fine examples of specific workloads where memory counts.

In addition, the thread count can be 4 or another number, and I suggest that a single task can use more than one core and instruction set (NEON, for example, or simultaneous multithreading of the FPU, SMT).
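As a sketch of a single task spanning several cores, the OpenMP loop below spreads a placeholder kernel over all available threads (SMT siblings count as extra logical cores and are picked up automatically). Build with gcc -O3 -fopenmp.

/* One task, many cores: a placeholder OpenMP kernel. */
#include <stdio.h>
#include <omp.h>

#define N (1 << 20)

static double data[N];

int main(void)
{
    double sum = 0.0;

    /* Each thread processes a slice of the array; the reduction combines
       the per-thread partial sums at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < N; i++) {
        data[i] = (double)i * 0.5;
        sum += data[i];
    }

    printf("max threads: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}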

Specific workload optimisation, or rather generic optimisation with SSE, AVX, FPU threading and precision tuning, would be very cool while we deal with the workload-running app.
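To make the SSE/AVX/FMA point concrete, here is a hedged sketch of an explicit AVX2 + FMA inner loop; a fused multiply-add computes a*b + c with a single rounding, which is where the precision (and throughput) benefit comes from. The function and data are illustrative; build with gcc -O2 -mavx2 -mfma.

/* y[i] = a * x[i] + y[i], four doubles per step; n is assumed to be a
   multiple of 4 in this simplified sketch. */
#include <stdio.h>
#include <immintrin.h>

static void axpy_fma(double a, const double *x, double *y, int n)
{
    __m256d va = _mm256_set1_pd(a);
    for (int i = 0; i < n; i += 4) {
        __m256d vx = _mm256_loadu_pd(&x[i]);
        __m256d vy = _mm256_loadu_pd(&y[i]);
        _mm256_storeu_pd(&y[i], _mm256_fmadd_pd(va, vx, vy)); /* fused a*x + y */
    }
}

int main(void)
{
    double x[8] = {1, 2, 3, 4, 5, 6, 7, 8};
    double y[8] = {1, 1, 1, 1, 1, 1, 1, 1};
    axpy_fma(2.0, x, y, 8);
    printf("y[0] = %f, y[7] = %f\n", y[0], y[7]); /* 3.0 and 17.0 */
    return 0;
}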

In particular, the multi-core Ryzen is a new and exciting product.

So take care to read the guides in the lower half of the document; AVX2, RDSEED, ADX and additional encryption formats are some of the most exciting changes to the AMD Ryzen architecture.
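As an illustration of RDSEED, the snippet below pulls one 64-bit value from the CPU's hardware entropy source; it assumes a processor that actually has RDSEED (Ryzen, or recent Intel) and a build with gcc -O2 -mrdseed.

/* Read one 64-bit hardware seed via RDSEED; retry on transient failure. */
#include <stdio.h>
#include <immintrin.h>

int main(void)
{
    unsigned long long seed;

    /* RDSEED can fail when the entropy pool is momentarily empty. */
    while (!_rdseed64_step(&seed))
        ;

    printf("hardware seed: 0x%016llx\n", seed);
    return 0;
}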

Further thought ... Efficiency:

Add a MHz / Dhrystones / MIPS performance-per-watt figure to each system;
then projects can further optimise workloads to improve energy and environmental efficiency versus the work carried out.

Work Hours x MHz / (efficiency per watt)
----------------------------------------
Hours / % of projects finished with work completed
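Transcribed directly into C as a sketch: how "efficiency per watt" is measured (MHz, Dhrystones or MIPS per watt) is left open here, as it is above, and the inputs in main are made-up placeholders.

/* The efficiency ratio above, taken literally; all inputs are placeholders. */
#include <stdio.h>

static double workload_score(double work_hours, double mhz,
                             double efficiency_per_watt,
                             double hours, double pct_finished)
{
    double numerator   = work_hours * mhz / efficiency_per_watt;
    double denominator = hours / pct_finished;
    return numerator / denominator;
}

int main(void)
{
    /* Hypothetical host: 120 h of work at 3400 MHz, 52 MHz/W,
       150 h wall time, 95% of issued work finished. */
    printf("score = %f\n", workload_score(120.0, 3400.0, 52.0, 150.0, 0.95));
    return 0;
}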

Also bear in mind that GPUs need watt-efficiency and task management to optimise power used versus work done...

Worker priority should always be:

efficiency + merit of the work
------------------------------
time / % necessity
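The priority ratio transcribed the same way; "merit" and "% necessity" are whatever scores a project chooses to assign, and the example values are placeholders.

/* Worker priority = (efficiency + merit) / (time / % necessity). */
#include <stdio.h>

static double worker_priority(double efficiency, double merit,
                              double time, double pct_necessity)
{
    return (efficiency + merit) / (time / pct_necessity);
}

int main(void)
{
    printf("priority = %f\n", worker_priority(0.8, 0.9, 12.0, 0.75));
    return 0;
}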

Please examine the issue further.


Rupert S


HPC Computing workload photos http://bit.ly/HPCImpact

http://bit.ly/HPC-Dev

http://bit.ly/tRNG-Dev
Your opinions about DENIS@home
http://bit.ly/2Cerius

The DENIS program could use the GPU to map electrocardiograms in 3D ...

and observe the beating and modulation pattern ...

to predict future beating and stress levels ...

The DENIS project is awesome!

RS