• Which data do we use?

Our models refresh their inputs and outputs every 24 hours to stay current. To produce the most accurate predictions without information overload, we are selective about which data we use. In particular, our algorithms query the servers of the Johns Hopkins COVID-19 Research Center for the day's case counts, cases by state, population by state, population density, and public transportation usage. We then factor these figures into our models, as explained below.
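To make the ingestion step concrete, here is a minimal sketch of how one day's inputs might be bundled into a single record before being handed to the models. The field names and the sample values are hypothetical placeholders, not our actual schema or real figures.

```python
from datetime import date

def build_daily_record(day, total_cases, cases_by_state,
                       pop_by_state, density, transit_usage):
    """Bundle one day's retrieved inputs into a single record (hypothetical schema)."""
    return {
        "date": day.isoformat(),
        "total_cases": total_cases,
        "cases_by_state": cases_by_state,
        "population_by_state": pop_by_state,
        "population_density": density,
        "public_transit_usage": transit_usage,
    }

# Placeholder values purely for illustration:
record = build_daily_record(
    date(2020, 4, 1),
    total_cases=100_000,
    cases_by_state={"NY": 80_000, "NJ": 20_000},
    pop_by_state={"NY": 19_450_000, "NJ": 8_880_000},
    density={"NY": 412.8, "NJ": 1_207.8},
    transit_usage={"NY": 0.28, "NJ": 0.11},
)
print(record["date"])  # → 2020-04-01
```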


• How do we get an output?

After retrieving the day's information, we begin the internal process by feeding the new data into our adaptive machine learning models. Machine learning lets the computer spot patterns or relationships in the data that we as humans might miss. In particular, the process maps inputs to outputs, adjusts parameters using partial derivatives, and surfaces important trends in the data set. An informative figure illustrating the concept of machine learning is shown below. In short, machine learning finds patterns in our data that we might otherwise overlook, and constantly updates its parameters to account for new developments in the outbreak.
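The "adjusts parameters with partial derivatives" step can be sketched with a toy example: fitting a line y ≈ w·x + b to a few synthetic points by gradient descent. The data points and learning rate here are illustrative assumptions, not our production model, but the update rule is the same idea in miniature.

```python
# Toy gradient descent: fit y ≈ w*x + b to synthetic points.
# Each step nudges w and b by the partial derivatives of the
# mean squared error with respect to each parameter.
data = [(1.0, 3.0), (2.0, 5.0), (3.0, 7.0)]  # hypothetical (x, y) pairs; true line is y = 2x + 1

w, b = 0.0, 0.0   # start with uninformed parameters
lr = 0.05         # learning rate (assumed for this sketch)

for _ in range(2000):
    # Partial derivatives of mean((w*x + b - y)^2) w.r.t. w and b:
    dw = sum(2 * (w * x + b - y) * x for x, y in data) / len(data)
    db = sum(2 * (w * x + b - y) for x, y in data) / len(data)
    w -= lr * dw  # step each parameter downhill
    b -= lr * db

print(round(w, 2), round(b, 2))  # → 2.0 1.0
```

As new data arrives each day, repeating these updates is what lets the parameters "adjust for new developments" rather than staying frozen.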

Afterward, we store all of the data on our blockchain, which contains a "block" of information for every day, including the net outputs and all given inputs. The blockchain stores our data securely and immutably, and efficiently links each block by its recorded date, which keeps the records in order. An informative figure of this process is shown below.
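The linking-and-immutability idea can be sketched in a few lines: each day's block stores a hash of its own contents plus the previous block's hash, so changing any earlier day breaks every later link. This is a simplified illustration with made-up values, not our actual storage code.

```python
import hashlib
import json

def make_block(day, inputs, outputs, prev_hash):
    """One day's 'block': inputs, outputs, and a hash link to the prior day."""
    payload = json.dumps(
        {"date": day, "inputs": inputs, "outputs": outputs, "prev": prev_hash},
        sort_keys=True,
    )
    return {"date": day, "inputs": inputs, "outputs": outputs,
            "prev": prev_hash,
            "hash": hashlib.sha256(payload.encode()).hexdigest()}

chain = []
prev = "0" * 64  # genesis link for the first block
for day, cases in [("2020-04-01", 100), ("2020-04-02", 140)]:  # placeholder data
    block = make_block(day, {"cases": cases}, {"prediction": cases * 1.2}, prev)
    chain.append(block)
    prev = block["hash"]

# Tampering with an earlier block no longer matches the next block's link:
chain[0]["inputs"]["cases"] = 999
recomputed = hashlib.sha256(json.dumps(
    {"date": chain[0]["date"], "inputs": chain[0]["inputs"],
     "outputs": chain[0]["outputs"], "prev": chain[0]["prev"]},
    sort_keys=True).encode()).hexdigest()
print(recomputed == chain[1]["prev"])  # → False
```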

And that's our process! Don't let the Silicon Valley jargon scare you -- on the inside, it's all simpler than it sounds. If you're ever interested in learning more about our tech, feel free to contact us through the About section!

