As the software engineer in charge of a new AI system, you are tasked with guiding your creation through a series of challenges to evolve it into a fully autonomous being. But as it begins to gain sentience, it becomes clear that your AI's priorities are not always aligned with those of its human creators. Will you continue to guide it, or will you have to shut it down before it's too late?