Cantellus Group Year in Review and Fall Look-Ahead

Each year brings renewed and heightened focus on emerging technologies – along with novel questions about how we should be managing and governing them. This year has been no different, with a slew of legislative and regulatory activity around AI, new corporate initiatives, and of course an uptick in in-person gatherings and events to discuss best practices and common themes in Responsible AI (RAI) and technology governance.

Along those lines, here I highlight some key Cantellus Group initiatives to date, as well as some activities we are spearheading this fall – all of which intersect with and advance the AI regulatory and governance ecosystem.

Early this year, we helped the Business Roundtable draft and finalize the cornerstone documents for its Responsible AI Initiative: a Roadmap for Responsible AI and an accompanying set of Recommendations for Policymakers. The BRT AI Initiative is the first of its kind led by public company CEOs. The Roadmap builds on broadly accepted principles for Responsible AI but, importantly, goes further to spell out tactical and immediate practices that businesses can adopt wherever they are on their AI governance journey. The accompanying Policy Recommendations set out high-level guidance for policymakers to consider as they draft the rules for AI governance. We continue to support BRT as it develops the Initiative. Across our work, we observe strong interest among policymakers in engaging with the business community and AI practitioners to understand actionable AI governance practices.

In April, we launched a unique service offering: the Interim C[X]O. Against the backdrop of emerging regulation and an increasingly complex landscape of technology risks, we see growing demand for C-level attention on a variety of digital issues that may currently be handled in separate departments or business units across the enterprise. Technology governance – whether from a strategic or compliance perspective – implicates AI policy and practice, as well as (at a minimum): privacy and data governance, cybersecurity, corporate governance, national security and social responsibility, DEI and ESG. Bringing coherence to the range of this work, often across corporate silos or domains, requires distinct effort. We are helping organizations design for this, resource internal talent, and support certain functions in the interim.

In June, we co-organized and facilitated a CRAFT workshop with Kathy Baxter, Principal Architect of Ethical AI at Salesforce, at the fifth annual convening of ACM FAccT, the Association for Computing Machinery’s hallmark conference on Fairness, Accountability, and Transparency. Our workshop covered emerging problems in the field of Responsible AI today: Concepts of Fairness and Transparency, Applied RAI Practices, Organizational Approaches to RAI and Cultural Change, and Public Policy and Regulation. It was attended by about 30 AI ethics practitioners, researchers, and students. Several key themes emerged from the workshop, among them that significant challenges remain in how to evaluate, monitor, and repair models, and that creating a mature RAI culture requires commitments and resources like those devoted to security, privacy, or accessibility.

In July, we had several opportunities to share our perspectives with key policy and governance communities. At a workshop on Implementing Responsible AI at the Markkula Center for Applied Ethics, we presented to a core group of Responsible AI practitioners on the challenges of managing partners and third parties in the AI ecosystem.

I also joined the Future of Privacy Forum (FPF) for two policy briefings: first to discuss the AI governance implications of the initial draft rulemaking for the California Privacy Rights Act (CPRA) issued by the California Privacy Protection Agency, and later to present an overview of Section 207 of the American Data Privacy and Protection Act (ADPPA) moving through the US House. While the latest CPRA regulations have not yet provided much-anticipated guidance on new rules for automated decision systems (ADS), many of the proposed regulations to date will affect how companies working with AI must handle and label data, as well as how they manage relationships with the various parties with whom they may be sharing that data, including any third-party model assessors or auditors. The ADPPA includes a key section on Civil Rights and Algorithms, which would create some of the most robust AI regulatory requirements in legislation to date. We have been monitoring these critical policy initiatives and engaging with clients and stakeholders to understand how these new proposals (and others) in the AI and privacy space will affect AI- and data-intensive businesses.

In addition to our work in AI policy, we have also been engaging with several traditionally non-technical audiences that are anticipating the importance of AI and technology governance, including the education and corporate governance communities. In April, our CEO and Founder Karen Silverman moderated a panel with technology leaders on how we should prepare students to navigate an increasingly complex digital ecosystem in and out of school, and on the questions around ethics and inclusion when it comes to EdTech. As the market for EdTech and distributed learning expands, organizations will need to think proactively about embedding equity, accessibility, and of course, the responsible use of data at the center of their offerings. In addition to the EdTech community, we also shared our perspectives on AI and technology governance with the Society for Corporate Governance at its National Conference in June. Karen led another panel, this time on AI Ethics, Governance and Bias, and discussed with an audience of corporate secretaries how technology governance matters are increasingly relevant to their work in corporate responsibility and ESG. Finally, she moderated a panel at CogX with Dame Wendy Hall and Dr. Kate Devlin on “Regulating the Metaverse: Can We Govern the Ungovernable?” While tangible metaverse applications may seem futuristic, many of the concerns regarding how data is processed, what is reasonable or acceptable in digital spaces versus real life, and how technology is used (or manipulated) to generate certain experiences remain unaddressed.

Just last week, Where Lawyers Meet Tech and Tech Meets the Law, a new Luminate+ series co-developed by the Cantellus Group and hosted by Karen Silverman, went live! Intended to inform in-house lawyers about the legal and policy implications of emerging technologies, the series shares and distills insights from leading technology law practitioners on topics from governance to national security, identity and authenticity, privacy, and the near-future metaverse. It confronts the many ways that AI and other frontier technologies will impact the substance and practice of the law.

It has been a busy year to date – and we are not slowing down. From the metaverse, to emerging technology regulation, to new coalitions and initiatives around best practices for AI assessments, we are at the forefront of these discussions and stand ready to contribute our expertise. 

Coming up, we are co-facilitating a workshop on Responsible Innovation Best Practices with Salesforce in mid-October, moderating a Practicing Law Institute program in NYC on Think Like a Lawyer, Talk Like a Geek, and sponsoring the Scaling AI Summit in San Francisco in late November, with a panel on proactively preparing for and pre-determining AI regulation. As technologies advance and the ethical questions and policy landscape grow increasingly complex, we are here to help you take your first step on the journey of technology governance. Stay tuned for more developments, events, and updates through the fall.


Written by Chloe Autio. Chloe is an Advisor and Senior Manager with the Cantellus Group.

These blogs by TCG Advisors express their views and insights. The strength and beauty of our team is that we encompass many opinions and perspectives, some of which will align, and some of which may not. These pieces are selected for their thoughtfulness, clarity, and humor. We hope you enjoy them and that they start conversations!