AI is becoming a general-purpose technology that permeates many facets of our lives and work. The rapid advance of AI capability points to two significant conclusions. First, humans will increasingly allow AI to augment our decisions, letting machines assist us wherever they add value. Second, and perhaps more critically, we will increasingly trust AI systems to act autonomously in many contexts, without real-time human input.

Take human resources. Many HR departments now rely on AI to screen job applications submitted through online platforms, making the vetting process faster and often more efficient as recruiters defer to AI systems for preliminary decisions. Similarly, consider Google's approach to managing its data centers: initially, AI was used to suggest optimal actions to human operators, but the technology has since evolved to the point where it autonomously manages the control systems. AI-driven hedge funds exemplify the same trend. Fund managers set broad parameters for investment strategies but increasingly entrust AI to make independent buy and sell decisions, entirely without real-time human input.

In the near future, AI will surpass human performance in many more decisions across many sectors. This growing autonomy raises profound moral dilemmas, especially where decisions carry significant consequences. Society must grapple with these quandaries; without systematic and comprehensive exploration of such issues, we risk either chaos or stagnation in AI deployment.

A prominent example is the evolution of autonomous vehicles. Cars are progressing from basic driving aids, such as lane-keeping assistance, to self-parking systems and, eventually, full autonomy. These advances promise safer roads. Yet every fatal accident involving an autopilot system incites widespread media outrage, while the roughly 36,000 people killed each year in the U.S. by human drivers hardly garner the same level of public concern. Research offers an explanation: society accepts human imperfection more readily than machine error, viewing the former as ordinary fallibility and the latter as system failure.

Now envision a near future in which cars are fully autonomous and 100% safe, making no mistakes of their own. Logically, we should prefer such vehicles for their life-saving potential. Yet the moral dilemmas surrounding their deployment may become even more intricate. A variation of the well-known trolley problem illustrates the issue. Suppose it is 2027 and you are traveling alone in an impeccably safe autonomous car when a heavy object drops from a truck ahead. The AI faces a critical decision: plow through the obstacle, endangering your life; veer right into a minivan with five elderly passengers; or swerve left toward a sedan carrying two people. What is the morally correct action? Would the dilemma shift with the ages of those involved? And if you are a 25-year-old passenger, how does that change the calculus between the minivan full of older individuals and the sedan with younger occupants?
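To see why these choices are design decisions rather than split-second improvisation, consider a deliberately simplified sketch. Everything in it is hypothetical: the option names, the probabilities, and the utilitarian scoring rule are invented for illustration, and real autonomous-driving systems work nothing like this. The point is only that whatever policy the car follows must be committed to code in advance:

```python
# Illustrative sketch only. All names, weights, and probabilities here are
# hypothetical, invented to show that the trade-off must be written down
# as an explicit design decision long before any collision occurs.
from dataclasses import dataclass

@dataclass
class Option:
    label: str                    # e.g. "plow through the obstacle"
    occupants_at_risk: int        # people endangered if this option is taken
    fatality_probability: float   # assumed chance of a fatal outcome

def expected_harm(option: Option) -> float:
    # A crude utilitarian score: expected number of lives lost.
    # Choosing this formula, and ignoring age, passenger versus bystander,
    # or fault, is itself a moral decision made by the design team.
    return option.occupants_at_risk * option.fatality_probability

def choose(options: list[Option]) -> Option:
    # The machine does not improvise; it executes whatever policy
    # its designers committed to in advance.
    return min(options, key=expected_harm)

# The scenario from the text, with invented probabilities:
options = [
    Option("plow through the obstacle", 1, 0.8),    # you, the lone passenger
    Option("veer right into the minivan", 5, 0.3),  # five elderly passengers
    Option("swerve left into the sedan", 2, 0.5),   # two occupants
]
print(choose(options).label)  # "plow through the obstacle" (0.8 < 1.0 < 1.5)
```

Even this toy policy takes a stance: it minimises expected fatalities, which under these invented numbers means sacrificing its own passenger, and it deliberately ignores age. Weighting lives by age, or privileging the passenger, would be a different but equally explicit design decision.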
People typically accept the spontaneous, unplanned decisions a human driver makes in such circumstances, but we hold machines to a different standard. Choices made by AI are scrutinised as design decisions, in which an algorithmic choice can be perceived as favouring one life over another. This creates significant challenges for the engineers building these systems. What moral frameworks should they incorporate into the AI's decision-making? And as a CEO, what ethical guidance should you provide during the design and implementation of such technologies? These questions make clear that we face not merely technical engineering challenges but profound moral inquiries, ones that require collaboration among engineers, philosophers, lawmakers, and regulators who together embody society's values and beliefs.

Reflecting on potential applications of AI within your own organisation raises the same questions. What ethical challenges does your leadership face today, and how might they evolve over the next five years? In navigating these complexities, companies must prepare for strategic discussions that encompass not only technological innovation but also the ethical ramifications of their choices in an increasingly automated future.