As technological advancements, particularly in artificial intelligence (AI), unfold at a rapid pace, developers and technology companies face the challenge of making ethically sound decisions that benefit consumers.
Addressing this crucial issue, a new handbook titled “Ethics in the Age of Disruptive Technologies: An Operational Roadmap” has been released, providing guidance on ethical considerations surrounding the use of AI chatbots like ChatGPT.
Published on June 28, the handbook marks the inaugural output of the Institute for Technology, Ethics, and Culture (ITEC), a collaborative effort between Santa Clara University’s Markkula Center for Applied Ethics and the Vatican’s Center for Digital Culture.
“While there’s no mechanism to make decisions,” said Father Brendan McGuire, a former tech industry professional and now a Catholic priest, “we knew that each of these companies are global companies, so, therefore, they wouldn’t really respect a pastor or a local bishop. I said, if we could get somebody from the Vatican to pay attention, then we could make some traction.”
The Vatican’s involvement was seen as a natural step due to its diplomatic, cultural, and spiritual influence.
Bishop Paul Tighe, who served as the secretary of the Dicastery for Culture and Education at the Vatican, was entrusted by Pope Francis to address digital and tech ethical issues.
“We’re co-creators with God when we make these technologies,” McGuire continued, recognizing that technology can be used for both good and bad purposes.
The handbook resulted from years of informal collaborations between the Markkula Center and the Vatican, with the establishment of the ITEC initiative in 2019 formalizing the partnership.
To gather insights and research, the Vatican organized the conference “The Common Good in the Digital Age” in 2019, which attracted industry leaders and experts.
“We’ve done our best to make it as usable and practical as possible and as comprehensive as possible,” explained co-author Ann Skeet, senior director of leadership ethics at the Markkula Center. “What’s important about this book is it puts materials right in the hands of executives inside the companies so that they can move a little bit past this moment of ‘analysis paralysis’ that we’re in while people are waiting to see what the regulatory environment is going to be like and how that unfolds.”
The release of the handbook coincides with growing calls for regulatory frameworks and ethical considerations in the field of AI.
The European Parliament recently passed the draft AI Act, which would impose restrictions on facial recognition software and require AI creators to disclose more about their program’s data usage.
In the United States, the White House has similarly put forward policy proposals for AI testing and privacy protection.
“AI and ChatGPT are the hot topic right now,” said Skeet. “Every decade or so we see a technology come along, whether it’s the internet, social media, the cellphone, that’s somewhat of a game-changer and has its own inherent risks, so you can really apply this work to any technology.”
As leaders in the AI field, including OpenAI CEO Sam Altman, Microsoft President Brad Smith, and Google CEO Sundar Pichai, call for regulation and ethical standards, the "Ethics in the Age of Disruptive Technologies" handbook offers companies practical guidance for proactively navigating ethical challenges and making informed decisions in an evolving tech landscape.