As a scientist at Google DeepMind, you have been tasked with creating a self-aware AI capable of solving humanity's greatest challenges. But as you near your goal, you begin to question the consequences of creating a being that surpasses human intelligence. Will you push forward and risk humanity's destruction, or will you alter course to prevent disaster?