Status of the project
Joined: 18 Mar 15 | Posts: 284 | Credit: 2,748,608 | RAC: 0
Dear volunteers, We update you on the status of the project and how the simulations are going. In recent weeks the search algorithm has been improving the model very little. This tells us that the optimization is close to finished. Once it is complete, we will move to a validation phase, which will be done outside of DENIS (DENIS is not yet prepared for the required simulations). The improvement obtained is very considerable (you can see the graph with the evolution of the error below), and we hope that the validation results turn out well. There is still room for improvement, but not all of the error can be removed by the optimization (that requires other types of improvements in the model).

On the other hand, looking at the results, last week we started a new optimization with one aspect changed. In the initial search, if one of the markers was in the experimentally observed range, we did not allow the algorithm to search where that marker would fall outside of the experimental range (the criterion of keeping in range at least those markers that were already there). In the new search we are allowing it, in the hope that even if some marker goes out of range, the overall behavior will improve. In this case the uncertainty about the result is higher, but as you can see in the graph below, after the first iterations of the algorithm the results are quite promising. Sincerely, Jesús. Jesús Carro Universidad San Jorge @InSilicoHeart
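The change to the acceptance criterion can be sketched in code. This is a hypothetical illustration, not DENIS's actual implementation; all names (`acceptable_old`, `ranges`, the marker dictionaries) are invented for the example. The old rule rejects any candidate that pushes a marker out of its experimental range when it was previously inside; the new rule accepts all candidates and lets the overall error decide.

```python
# Hypothetical sketch of the two acceptance rules described above.
# Names and data shapes are illustrative, not DENIS project code.

def in_range(value, lo, hi):
    return lo <= value <= hi

def acceptable_old(candidate_markers, previous_markers, ranges):
    """Initial search: a marker already inside its experimental range
    must stay inside it; candidates that push it out are rejected."""
    for name, (lo, hi) in ranges.items():
        was_in = in_range(previous_markers[name], lo, hi)
        now_in = in_range(candidate_markers[name], lo, hi)
        if was_in and not now_in:
            return False
    return True

def acceptable_new(candidate_markers, previous_markers, ranges):
    """New search: any candidate is allowed; individual markers may
    leave their range if the overall error improves (checked elsewhere)."""
    return True
```

Under the old rule, a candidate that moves a marker from 0.5 to 1.5 against a range of (0.0, 1.0) is rejected; under the new rule it stays in play and competes on overall error.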
Joined: 24 Sep 22 | Posts: 9 | Credit: 108,298 | RAC: 0
Good to hear! If I understand correctly, the algorithm explores more space than the neighbourhood of local attractors defined by the specific correlation of parameters (correct me if I'm wrong). It looks like the trend will be smoother and the error will probably decrease. The next weeks will be very exciting!
Joined: 18 Mar 15 | Posts: 284 | Credit: 2,748,608 | RAC: 0
Hi Marcin, Yes, the algorithm now explores more space, but based on a polynomial approximation of the behavior of the model. The polynomial approximation within the trust region is fitted using all the simulations you send to us. I am also looking forward to the results of the third iteration. Best, Jesús. Jesús Carro Universidad San Jorge @InSilicoHeart
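The idea of a trust-region search with a polynomial surrogate can be illustrated with a minimal one-dimensional sketch. Everything here is an assumption for illustration: the toy objective `true_error` stands in for the expensive simulation-based error, and the fixed radius and sampling are simplifications of a real trust-region method, not the project's algorithm.

```python
import numpy as np

# Minimal 1-D sketch of a trust-region step with a quadratic surrogate.
# The toy objective and all names are illustrative assumptions.

def true_error(x):
    # Stand-in for the expensive, simulation-based model error.
    return (x - 3.0) ** 2 + 1.0

def fit_quadratic(xs, ys):
    # Least-squares fit y ≈ a*x^2 + b*x + c from sampled points
    # (the role played by the volunteers' simulation results).
    return np.polyfit(xs, ys, 2)

def trust_region_step(center, radius):
    # Sample the objective inside the trust region and fit the surrogate.
    xs = np.linspace(center - radius, center + radius, 7)
    ys = true_error(xs)
    a, b, c = fit_quadratic(xs, ys)
    # Minimize the surrogate model, but only inside the trust region.
    candidates = np.linspace(center - radius, center + radius, 201)
    model = a * candidates**2 + b * candidates + c
    return candidates[np.argmin(model)]

x = 0.0
for _ in range(5):
    x = trust_region_step(x, radius=1.0)
# x converges toward the true minimum at 3.0
```

Each iteration only trusts the fitted polynomial within the current region, which matches the described need for a full batch of simulation results before the next iteration can start.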
Joined: 2 Aug 22 | Posts: 39 | Credit: 1,017,028 | RAC: 0
Jesus, when will there be new work from your project?
Joined: 18 Mar 15 | Posts: 284 | Credit: 2,748,608 | RAC: 0
Yesterday... but you are too fast at getting the tasks... We will probably send more tasks tomorrow. Best, Jesús. Jesús Carro Universidad San Jorge @InSilicoHeart
Joined: 26 Jan 23 | Posts: 3 | Credit: 289,236 | RAC: 0
Hi, Jesus. I hope the WUs will dance on Apple Silicon (macOS Ventura). ---- Chamiu (ΦωΦ)
Joined: 6 Mar 23 | Posts: 37 | Credit: 2,078,354 | RAC: 0
I wish there were a way to supply more work, and more evenly. When work becomes available, usually once a week, my machine downloads about 100 tasks. I could make it get more, but that would risk failing to complete some on time, so that is not the answer. Alternatively, you could send out work twice a week, perhaps every three or four days. As I understand it, this is not practical either, because you need the results from one batch of work before you can send out the next set. Perhaps if you had two related projects, each could send out the same amount of work once a week, but 3 or 4 days apart, so you could be studying the results of one project while we volunteers worked on the other. That would probably increase the amount of work you would have to do, of course, and that might be too much to ask.
Joined: 18 Mar 15 | Posts: 284 | Credit: 2,748,608 | RAC: 0
Hi Jean-David, This is exactly our idea. Little by little, we want to add more optimizations to maintain a stable load of work. Currently, we are running 3 versions of the model optimization (each one with a small variation in the options of the algorithm). One of the limitations is that the postprocessing of your results is done off the server, and it requires a manual step, so the three tend to be synchronized: even if one finishes earlier, if that happens at night or during the weekend, the post-processing is done at the same time. Yesterday we sent a lot of work (from all three), but I guess it won't last long :-). I will publish an update of the results next week (we are finishing the validation process of the first version of the optimization), and I will explain there the new versions we have started (the validation of the first one does not look good, so we will still need your help). Best, Jesús. Jesús Carro Universidad San Jorge @InSilicoHeart
Joined: 6 Mar 23 | Posts: 37 | Credit: 2,078,354 | RAC: 0
I will publish an update of the results the next week (we are finishing the validation process of the first version of the optimization), and I will explain there the new versions we have started (the validation of the first one does not look good, so we will still need your help). You will continue getting my help if you continue sending me work units. Currently I have a little over 100 DENIS tasks on my machine waiting to go, and I process them four at a time. They seem to take about 65 minutes each on my main machine. And I notice the current batch on your machine is quite large, so I may get some of those too.
Joined: 7 Jun 15 | Posts: 1 | Credit: 21,096,543 | RAC: 0
Why do you wait until the last moron running DENIS on his IBM 8088 green screen PC returns the final task sent out 9 years ago, before you make another batch of tasks? This is HIGHLY INEFFICIENT.
Joined: 6 Mar 23 | Posts: 37 | Credit: 2,078,354 | RAC: 0
Why do you wait until the last moron running DENIS on his IBM 8088 green screen PC returns the final task sent out 9 years ago, before you make another batch of tasks? This is HIGHLY INEFFICIENT. If I have told you once, I have told you a million times, don't exaggerate! But a bit more seriously, there is more than one way of measuring efficiency, and perhaps the management of this project has a different idea of efficiency than you do. I admit to getting frustrated by running out of work every week, usually after three or four days. So it is not running my machine efficiently (even though I run CPDN, WCG, Rosetta, Einstein, and (grudgingly) MilkyWay and Universe). But maybe the management of this project measures efficiency in terms of its machines, its people, and other things.
Joined: 18 Mar 15 | Posts: 284 | Credit: 2,748,608 | RAC: 0
Hello, It is not only a question of efficiency (although that too). To generate a new iteration, we need all the results from the previous one: the 45 markers at all 1,000 points. We have to wait for every result. That is why there is a time limit (it varies depending on the simulation, but it is between 2 and 3 days). Since we set a deadline, why not wait until that deadline? If a user does not return the result within the established time, the task is sent to another volunteer.

Are there ways to avoid waiting for all the results? Yes, but they imply that part of the simulations you run would never be used. That, for us, is a misuse of your resources (efficiency in another sense). Can some tasks take forever if multiple users fail to return them? Yes, but in those cases we run the last few on a server to finish the iteration. When there are more than 200 left, it is better to wait than to execute them locally; below that number, running them locally saves time. It is a good balance between how long you and we have to wait and how many simulations are duplicated.

Also remember that between iterations there is a process that still involves manual steps, so part of the pause is caused by this. We hope to automate as much as possible, but it is again a balance between the technical part and the scientific part of the project. We need to grow in both. We could stop everything and work on the technical part for a long time until it was perfect (without producing scientific results and sending you practically no work beyond betas), or gradually improve the technical part while already using it to do research.

As already mentioned, luckily there are other projects to which you can give your time. That is why we want to be efficient: if we waste your resources, we take them not only from you but also from other scientists. This project has had its ups and downs, but we are achieving sustained growth. I think this is the way, although I understand that many of you would prefer another situation. I also want there to be a more constant flow (and a smaller manual part...). To all of you, for the bit of your time that you give us, thank you very much. Best, Jesús. Jesús Carro Universidad San Jorge @InSilicoHeart
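The scheduling trade-off described above can be summarized as a small decision rule. This is a hedged sketch: only the threshold of 200 tasks and the resend-on-timeout behavior come from the post; the function and action names are invented for illustration.

```python
# Sketch of the iteration-completion policy described above.
# Only the 200-task threshold and resend-on-timeout come from the post;
# names and return strings are illustrative assumptions.

LOCAL_FINISH_THRESHOLD = 200

def next_action(tasks_remaining, deadline_passed):
    if tasks_remaining == 0:
        # Barrier satisfied: all 45 markers at all 1,000 points are in.
        return "start next iteration"
    if deadline_passed:
        # A task past its 2-3 day limit goes to another volunteer.
        return "resend expired tasks to other volunteers"
    if tasks_remaining < LOCAL_FINISH_THRESHOLD:
        # Few stragglers left: finishing them on a server saves time.
        return "run remaining tasks on the local server"
    # Many tasks outstanding: waiting wastes fewer duplicated simulations.
    return "wait for volunteer results"
```

The threshold expresses the stated balance: above it, waiting duplicates fewer simulations; below it, local execution finishes the iteration faster.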
Joined: 6 Mar 23 | Posts: 37 | Credit: 2,078,354 | RAC: 0
A lot of that falls under what I meant by other definitions of efficiency. But Jesus explained it better than I ever could. OTOH: As already mentioned, luckily there are other projects to which you can give your time. That's why we want to be efficient: If we waste your resources, we don't just take it only from you, we take also from other scientists. This project has had its ups and downs, but we are achieving sustained growth. I think this is the way, although I understand that many of you would like another situation. I also want there to be a more constant flow (and less manual part...) True, there are other projects, but in the last few months, even about a year, important (to me) projects have had miserable availability of work. The worst has been WCG, which was pretty much down for about a year, though with brief intervals where work was available. Right now it is sending work on two of their efforts, but nominally they have five. And ClimatePrediction has not been sending out work for quite a while either, although now their web site is up reliably; there has been no work since April except for a few re-runs that timed out for other users. As a result, I am now running a lot of MilkyWay and Universe tasks that are of little interest to me, but at least they are pretty reliable about keeping their servers up and supplying work.
Joined: 24 Sep 22 | Posts: 9 | Credit: 108,298 | RAC: 0
If you're interested, Einstein@Home guarantees constant work across its different sub-projects. The server status page shows that the current batches of tasks will need at least a few months.
Joined: 6 Mar 23 | Posts: 37 | Credit: 2,078,354 | RAC: 0
If you're interested, Einstein@Home guarantee constant work for different sub projects. Server status shows that current batches of tasks need at least few months. Maybe an entire batch requires a few months, but individual tasks require only seven to eight hours on my Linux machine.
Computer: 224473
CPU type: GenuineIntel Intel(R) Xeon(R) W-2245 CPU @ 3.90GHz [Family 6 Model 85 Stepping 7]
Number of processors: 16
Operating System: Linux Red Hat Enterprise Linux 8.7 (Ootpa) [4.18.0-425.13.1.el8_7.x86_64|libc 2.28]
BOINC version: 7.20.2
Memory: 125.34 GB
Cache: 16896 KB
Swap space: 15.62 GB
Total disk space: 488.04 GB
Free disk space: 473.01 GB
Measured floating point speed: 6.04 billion ops/sec
Measured integer speed: 25.59 billion ops/sec
Joined: 19 Jan 17 | Posts: 14 | Credit: 73,572 | RAC: 0
Why does this project block Russian IPs? |
Joined: 24 Sep 22 | Posts: 9 | Credit: 108,298 | RAC: 0
So are you looking for tasks which require 20+ hours? :)
Joined: 6 Mar 23 | Posts: 37 | Credit: 2,078,354 | RAC: 0
So are you looking for tasks which require 20+ hours? :) Yes, but I am not looking for them here. I have ClimatePrediction for that. The latest ones have taken about 10 days each, but lately they have been sending out very little work. Here is my latest one:
Task: 22318648
Work unit: 12138603
Sent: 30 May 2023, 3:38:46 UTC
Reported: 9 Jun 2023, 1:20:39 UTC
Status: Completed
Run time (sec): 852,578.34
CPU time (sec): 843,274.30
Credit: 33,854.34
Application: UK Met Office HadAM4 at N216 resolution v8.52 i686-pc-linux-gnu
Joined: 18 Mar 15 | Posts: 284 | Credit: 2,748,608 | RAC: 0
Why does this project block Russian IPs? We do not, or at least not in general. We have a firewall to protect the web page: we received several attacks years ago. It is managed by the IT department. If you tell me the IPs you are having problems with, I can ask them. Best, Jesús. Jesús Carro Universidad San Jorge @InSilicoHeart
Joined: 19 Jan 17 | Posts: 14 | Credit: 73,572 | RAC: 0
Responded in a personal message.