Supero foundations
Supero's methods were built on the same principles the EU AI Act now formalises. AI is held to the same bar as every other part of our work, applied case by case, with human oversight and clear documentation.
From 2 August 2026, the EU AI Act formalises this standard across the European market. We treat it as ratification of how we already work, not a constraint to manage. Final legal approval for any live client deployment sits with the client's own legal team.
Every Supero engagement is designed to be explainable to a board, responsible in how it treats people and data, and transparent in what it claims and what it does not. AI sits on top of that standard. It does not lower it. This page sets out how those principles show up in our work, why our methods are natively aligned with the EU AI Act, and where client-side legal approval remains essential.
AI is applied across a focused set of use cases where it materially improves the quality of revenue diagnostics, GTM design, or execution. Not every engagement uses AI. It is used where it is useful, and not where it is not.
Each use case is evaluated on its own merits against the type of activity, the people affected, the data involved, and the deployment context before any AI component is introduced.
These are not five rules layered on top of our methods. They are the foundations our methods were built on. They apply to internal work, client-facing systems, and anything we design for production use.
Each AI use case is reviewed against its type of activity, the people affected, the data involved, and the deployment context. The level of governance applied scales with the level of risk. This mirrors the risk-based structure of the EU AI Act.
AI supports judgement. Humans remain accountable for material business decisions. Review, escalation, and override paths are built into any workflow that touches forecasts, pipeline decisions, customer communications, or commercial commitments.
Where AI interacts with people or generates content in a relevant context, we apply appropriate disclosure and review measures. Users and stakeholders should be able to tell when they are engaging with an AI system, and reviewers should be able to understand what the system is doing and why.
Important AI-enabled workflows are documented at a practical level: purpose, data sources, model or tool used, known limitations, and controls. This supports client review, audit readiness, and informed use by operators.
AI systems use only approved data and approved tools. Privacy, confidentiality, and data protection considerations sit alongside AI governance, not separate from it. See our Privacy Policy for how we handle personal data.
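As an illustration of the documentation principle above, the fields a use-case record covers (purpose, data sources, model or tool, known limitations, controls) can be captured in a simple structured record. This is a hypothetical sketch for discussion, not Supero's actual documentation format; all names and values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class AIUseCaseRecord:
    """Illustrative record for documenting an AI-enabled workflow.
    Field names mirror the documentation principle; they are
    hypothetical, not a Supero artefact."""
    purpose: str
    data_sources: list[str]
    model_or_tool: str
    known_limitations: list[str]
    controls: list[str]

# Example record for a meeting-summarisation use case
record = AIUseCaseRecord(
    purpose="Meeting summarisation for internal review",
    data_sources=["approved call transcripts"],
    model_or_tool="approved LLM provider",
    known_limitations=["may omit nuance; summaries require human review"],
    controls=["human review before sharing", "approved data sources only"],
)

# A record without at least one named control is incomplete
assert record.controls
```

A structured record like this makes client review and audit readiness straightforward: each live workflow has a single entry stating what it does, what it touches, and how it is controlled.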
Our methods were built on the same principles the EU AI Act now codifies. The Conversation Operating System, our diagnostic instruments, and the Financial Impact Model are all designed around traceability, human accountability, and quantified confidence. The Act is a regulatory expression of how we already work.
The EU AI Act (Regulation (EU) 2024/1689) establishes a risk-based framework for AI systems, with stronger obligations applying to certain categories of use. Most of its provisions become applicable on 2 August 2026, with enforcement starting at national and EU level from that date.
The Act sorts AI systems into four broad risk categories: prohibited practices, high-risk systems, limited-risk systems (which carry transparency obligations under Article 50), and minimal-risk systems. Most AI-supported GTM activities (pipeline analysis, account research, content drafting, meeting summarisation, coaching support) are likely to fall outside the prohibited and high-risk categories. We apply the same explainable, responsible, transparent standard to every use case regardless of category.
Supero reviews AI use cases case by case and flags any scenario that may require deeper legal, compliance, privacy, or employment review before implementation. Where a use case could carry elevated risk, for example AI involved in employment decisions, access to services, or significant customer-facing automation, we make that explicit and recommend that the client's legal team be engaged early.
Our position: Supero's foundations meet the EU AI Act's standard by design. They sit alongside, not in place of, the client's own legal review.
We can help design AI-enabled GTM systems in a way that aligns with the principles and structure of the EU AI Act. We cannot, and do not, provide legal advice.
Final legal and regulatory approval for any live implementation must be provided by the client's own legal team. This is because compliance depends on the exact use case, data, contracts, jurisdictions, deployment model, and internal policies, none of which Supero is positioned to make binding decisions on.
Supero's policy supports responsible design and implementation. It does not replace legal advice, regulatory approval, or the client's own risk management processes.
Supero's foundations were built on three principles: explainable, responsible, transparent. AI is held to the same bar as every other part of our work, applied case by case to strengthen revenue diagnostics and GTM design, with human oversight and clear documentation. Our methods are natively aligned with the EU AI Act because the Act formalises the standard we already work to. Final legal approval for any live client deployment sits with the client's own legal team.
No. AI is applied only where it improves the outcome for the client. Many Supero engagements are led by human expertise alone, drawing on our Conversation Operating System and revenue diagnostic methodology. AI is introduced when it adds commercial value and can be governed responsibly.
Yes, by design. Our methods were built on explainability, human accountability, and quantified confidence from the start, which are the same principles the EU AI Act now codifies. We treat the Act as ratification of how we already work rather than a constraint to retrofit.
No. Supero does not provide legal advice and does not guarantee regulatory compliance. Compliance depends on the exact use case, data, contracts, jurisdictions, and deployment model. Supero supports responsible design and implementation; final legal and regulatory approval must come from the client's own legal and compliance team.
The client's own legal team. Supero can help design an AI-enabled system in a way that aligns with the principles and structure of the EU AI Act, but the accountable legal approval for any live deployment rests with the client.
Typical use cases include revenue diagnostics and pattern detection, sales and pipeline analysis, meeting and call summarisation, account research and planning, sales coaching support, draft generation for outreach or content, and GTM workflow design and automation support. Each use case is reviewed on its own merits.
Supero designs AI-enabled systems to support human decision-making, not to replace it. Where automation is introduced, humans remain accountable for material business decisions and appropriate oversight, review, and escalation steps are built into the workflow.
We'll walk through the specific use case, the data involved, and whether AI is the right lever before anything gets built.
Book a conversation