Fix visualization with -inf scores #380
Closed
nhuet wants to merge 1 commit into algorithmicsuperintelligence:main from
Conversation
Context

For evolutions based on positive metrics to minimize (like a cost), we want to be able to assign a negative `combined_score` (= -cost) to programs, and to assign `-inf` to programs that do not run properly so that they are always ranked worse than the others (especially if we do not know a bound on the cost). Note that if we do not assign any metrics to failing programs, the database will later return a fitness of 0 when requested (e.g. when ranking top programs) and thus treat the failing program as an improvement, which is obviously not what we want.

This works well during program evolution, but when using the (very nice) visualizer, loading the data fails.
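As a concrete illustration (a minimal sketch, not code from this PR: the `evaluate` signature, the `run_candidate` helper, and the metric names are assumptions), an evaluator for a cost to be minimized could look like this:

```python
import math

def evaluate(program_path: str) -> dict:
    """Hypothetical evaluator for a cost-minimization problem.

    Returns combined_score = -cost so that lower costs rank higher, and
    -inf when the evolved program fails, so it always ranks below any
    program that actually ran.
    """
    try:
        cost = run_candidate(program_path)  # assumed user-defined helper
        return {"cost": cost, "combined_score": -cost}
    except Exception:
        # A failing program gets the worst possible score instead of no
        # metric at all (which the database would otherwise treat as 0).
        return {"combined_score": -math.inf}
```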
More precisely, the checkpoint is loaded properly by the Python code and then sent to the JavaScript via a `Response` object decoded with `resp.json()` in `fetchAndRender()` from "main.js". This crashes if the payload does not fully respect the JSON spec (and NaN and Infinity are not valid JSON, even though they are valid JS values).
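The incompatibility is easy to reproduce on the Python side alone (a small standalone sketch, independent of the visualizer code): the standard `json` module serializes non-finite floats as `Infinity`/`-Infinity`/`NaN`, which strict parsers such as JavaScript's `JSON.parse` reject.

```python
import json

data = {"combined_score": float("-inf")}

# Default serialization emits '-Infinity', which is not valid JSON and
# makes JSON.parse() throw on the JavaScript side.
print(json.dumps(data))  # {"combined_score": -Infinity}

# Asking for strictly compliant JSON shows the value is out of spec.
try:
    json.dumps(data, allow_nan=False)
except ValueError as err:
    print(err)  # Out of range float values are not JSON compliant
```

Solution proposed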
In this PR, we replace `-inf`, `+inf`, and `nan` values in program metrics by `None` before visualizing. Thanks to that, the data loads correctly and the affected programs end up in the `NaN` box in the "performance" tab, which seems to be the proper way to visualize them.
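A minimal sketch of the kind of sanitization this implies (the function name and the exact place it would be called are assumptions, not the PR's actual code):

```python
import math

def sanitize_metrics(metrics: dict) -> dict:
    """Replace non-finite float values (inf, -inf, nan) by None so the
    resulting dict serializes to strictly valid JSON."""
    return {
        key: None if isinstance(value, float) and not math.isfinite(value) else value
        for key, value in metrics.items()
    }

print(sanitize_metrics({"cost": 12.5, "combined_score": float("-inf")}))
# {'cost': 12.5, 'combined_score': None}
```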