At Mentomy we build enterprise AI on the 9 fundamental pillars of Responsible AI, fully complying with the General Data Protection Regulation (EU GDPR)
and the European AI Regulation (EU AI Act) from the architectural core and at every step we take.
Artificial Intelligence only generates sustainable value when it is safe, ethical, transparent and controllable. At Mentomy, we understand that trust is the absolute foundation when organizations integrate AI into critical business operations.
Our private AI platform for European businesses is built on a comprehensive Responsible AI framework that not only complies with regulations such as GDPR and the European AI Act, but exceeds them through an architectural design where responsibility is the guiding principle behind every technical, commercial and ethical decision.
European companies face unique challenges: strict and evolving regulations, high privacy expectations from customers and employees, and the critical need to maintain sovereignty over sensitive strategic data.
Mentomy is not just another AI platform: we are the first enterprise AI solution designed specifically for the European market, where privacy, security, transparency and ethics are not optional features or marketing, but the fundamental technical and legal architecture that guarantees your long-term success.
This policy describes with technical precision and absolute transparency how we implement each of the 9 pillars of Responsible AI, strategically ordered by their critical importance to protecting your organization.
Before diving into technical and legal details, let us explain in clear terms what our platform does.
Imagine your company has an extremely intelligent librarian who has read all your internal documents: manuals, procedures, policies, reports, contracts, technical guides.
When you ask a question like "What is our vacation policy?" or "How do I configure the production server?", this librarian searches your documents, finds the exact passages that answer it, and replies in plain language, telling you which document the answer came from.
That is Mentomy. But instead of a human, it is an Artificial Intelligence system that works 24/7, responds in seconds, and works EXCLUSIVELY with your organization's information.
Not ChatGPT for businesses: ChatGPT knows general things from the internet. Mentomy only knows YOUR documents.
It does not invent answers: If the information is not in your documents, Mentomy says so clearly instead of speculating.
It does not learn from your data: Your conversations and documents are NEVER used to improve third-party AI models (OpenAI, Google, etc.).
It does not share anything: Your data is hermetically isolated. Other companies using Mentomy cannot see a single character of your information.
For those who want to understand the technology behind Mentomy, here is the technical detail explained clearly.
Mentomy uses an architecture called RAG that combines two technologies:
Step by Step: What Happens When You Ask Something
1. Document Upload (what you do once): your files are split into fragments, and each fragment is converted into a numerical vector (embedding) that is indexed for semantic search.
2. When You Ask a Question: your question is converted into a numerical vector in the same way, and the system retrieves the document fragments whose vectors are most similar to it.
3. Response Generation: the retrieved fragments are handed to the AI with an explicit instruction: "Answer ONLY based on these documents. If it's not there, say so clearly."
Pure LLMs (like standard ChatGPT): They respond based on billions of words from the internet that they "memorized" during training. They can invent plausible but false information (hallucinations).
RAG (like Mentomy): It first searches YOUR real documents, then formulates the response based ONLY on what was found. Hallucinations are reduced to near zero because the AI cannot cite what does not exist.
Result: Responses that are verifiable, accurate and grounded in your corporate reality, not in generic internet information.
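The upload, query and generation flow described above can be sketched in miniature. This is an illustrative toy, not Mentomy's implementation: the `embed` function below is a trivial bag-of-words stand-in for a real neural embedding model, and the vocabulary, document chunks and prompt wording are invented for the example.

```python
import math

def embed(text):
    """Toy embedding: word counts over a tiny fixed vocabulary.
    A real deployment would use a neural embedding model instead."""
    vocab = ["vacation", "days", "server", "policy", "production"]
    words = text.lower().split()
    return [words.count(term) for term in vocab]

def cosine(a, b):
    """Cosine similarity between two vectors (0 when either is all zeros)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

# 1. Document upload: chunks are embedded once and indexed.
chunks = [
    "Vacation policy: employees receive 22 vacation days per year.",
    "Production server setup requires approval from the platform team.",
]
index = [(chunk, embed(chunk)) for chunk in chunks]

# 2. Query: the question is embedded the same way and matched by similarity.
query = "How many vacation days do we get?"
qvec = embed(query)
best_chunk, score = max(((c, cosine(qvec, v)) for c, v in index),
                        key=lambda t: t[1])

# 3. Generation: the retrieved chunk is passed to the LLM together with a
#    strict grounding instruction, so it cannot answer from outside knowledge.
prompt = (
    "Answer ONLY based on these documents. If it's not there, say so clearly.\n\n"
    f"Document: {best_chunk}\n\nQuestion: {query}"
)
print(best_chunk)  # the vacation-policy chunk ranks highest for this query
```

The key property is in step 3: the language model never sees anything except the retrieved fragments and the question, which is what makes responses verifiable against the source documents.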
Your Data, Always Confidential: The Absolute and Unbreakable Foundation
Privacy is not an additional feature in Mentomy; it is the architectural foundation of our entire platform. We understand that your business data is your most valuable and critical asset. Every component, every line of code, every technical decision is designed with privacy as the absolute and unbreakable guiding principle.
Mentomy fully complies with the General Data Protection Regulation (EU) 2016/679, implementing privacy by design and by default throughout the platform architecture.
Each client operates in a dedicated workspace that is hermetically isolated at the physical and logical infrastructure level. Your sensitive data is never mixed, crossed, or shared with that of other clients. Multi-level tenant isolation guarantees this separation.
We only process data strictly necessary to provide the service (Art. 5.1.c GDPR). We do not collect additional information, unnecessary metadata, or usage data beyond what is technically indispensable. Data minimization as a fundamental engineering principle.
Mentomy NEVER uses your data to train proprietary or third-party foundational models. Your data is used exclusively to power your own private AI, under your total control. Absolute contractual and technical guarantee.
When you delete a file or folder, it is erased permanently and immediately from all our servers (data source server, client database server, and vector database server).
Administrators and users can access and modify their personal information available in Mentomy, as well as access the interactions and queries sent to the AI engine (chat history).
Administrators and users can immediately and without undue delay correct inaccurate, incomplete, or outdated data through real-time administration interfaces.
Administrators can permanently, completely and verifiably delete user personal data when legitimately requested.
Administrators and users can completely and verifiably delete the AI system memory (chat memory).
Administrators can export data in standard structured formats (Microsoft Excel, xlsx) and widely used formats for direct transfer to other systems without technical or legal obstacles.
Administrators and users can legitimately object to the processing of their data for certain specific purposes, including processing based on legitimate interest or for direct marketing purposes.
Administrators can temporarily limit how personal data is processed while queries, disputes, or accuracy verifications are being resolved, with clear marking of restricted records.
We maintain a robust and documented incident response protocol specific to privacy with designated teams and tested procedures. In the unlikely event of a personal data breach that presents a risk to rights and freedoms, we will notify the competent supervisory authority within 72 hours of becoming aware, in accordance with Article 33 of the GDPR.
If the breach presents a high risk to rights and freedoms, we will directly notify the affected data subjects without undue delay (Art. 34 GDPR), with clear communication about the nature, consequences, measures taken, and contact details of the DPO.
Multi-Layer Military-Grade Protection for Critical Assets
The security of your data is our top operational priority. We implement enterprise-level cybersecurity measures at every layer of the infrastructure: from network to application, through databases, APIs and development processes. Defense in depth as an unbreakable core strategy.
We implement appropriate technical and organizational measures in accordance with Article 32 GDPR to guarantee security of processing appropriate to the level of risk, including confidentiality, integrity, availability and resilience of systems and services.
Files uploaded by administrators are stored in a private Google Cloud Storage (GCS) bucket located in the European Union (Madrid), protected by granular IAM per tenant, AES-256 encryption at rest and VPC Service Controls.
AI-generated embeddings are stored in Pinecone Serverless (EU region), in independent indexes per tenant to ensure multi-level isolation. Pinecone provides: data encryption at rest (AES-256), TLS 1.3 end-to-end, API key-based access control and protection against malicious vector injection through prior validation.
Operational information and metadata (users, queries, logs, permissions) are stored in a MySQL/PostgreSQL database managed in Google Cloud SQL with:
Native AES-256 encryption at rest
TLS 1.3 encryption in transit
Per-instance isolation and private VPC
Automatic backups
Strict firewall rules (only accessible from authorized services)
KMS-managed key rotation
The architecture applies isolation at multiple layers:
Logical isolation of files in GCS per company
Complete isolation of Pinecone indexes (dedicated index per tenant)
Records segmented by company in the SQL database
Access via corporate email per tenant, preventing cross-access
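Multi-layer isolation of this kind is typically enforced by deriving every resource name from the tenant identity and guarding every data access with a tenant check. The sketch below illustrates the pattern; all names (`TenantContext`, the `kb-` index prefix, the example companies) are hypothetical, not Mentomy's actual identifiers.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TenantContext:
    """Identity attached to every request after authentication."""
    tenant_id: str
    user_email: str

def vector_index_name(ctx: TenantContext) -> str:
    # One dedicated vector index per tenant, derived only from the tenant id,
    # so a query can never address another company's index by construction.
    return f"kb-{ctx.tenant_id}"

def assert_tenant_access(ctx: TenantContext, resource_tenant_id: str) -> None:
    # Every storage/database call is guarded by this check before execution.
    if ctx.tenant_id != resource_tenant_id:
        raise PermissionError("cross-tenant access denied")

ctx = TenantContext(tenant_id="acme", user_email="ana@acme.example")
print(vector_index_name(ctx))      # kb-acme
assert_tenant_access(ctx, "acme")  # passes silently
# assert_tenant_access(ctx, "globex")  # would raise PermissionError
```

Because the index name is a pure function of the authenticated tenant, there is no code path in which one company's query reaches another company's data.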
From the user's browser to GCS, Pinecone and Cloud SQL, all data travels and is stored protected with HTTPS, TLS 1.3, AES-256 and strict integrity validation (HMAC). Mentomy never transmits unencrypted data or uses insecure channels.
All files are processed in memory within the backend on GCP and are subject to:
malicious format detection,
limits on file size, number of files and knowledge base storage space
sanitization and structural validation
anti data-poisoning control before generating embeddings
This prevents manipulated documents from affecting the system or contaminating the vector database.
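In miniature, upload-time controls of the kind listed above might look like this. The size limit, allow-list and magic-byte table are illustrative assumptions for the sketch, not Mentomy's actual values.

```python
MAX_FILE_MB = 50                              # illustrative size limit
ALLOWED_TYPES = {".pdf", ".docx", ".txt"}     # illustrative allow-list

# Declared extension must match the file's actual byte signature.
MAGIC_BYTES = {
    ".pdf": b"%PDF",  # every real PDF starts with this signature
}

def validate_upload(filename: str, content: bytes) -> None:
    """Reject suspicious files before any embedding is generated."""
    ext = ("." + filename.rsplit(".", 1)[-1].lower()) if "." in filename else ""
    if ext not in ALLOWED_TYPES:
        raise ValueError(f"file type not allowed: {ext or 'unknown'}")
    if len(content) > MAX_FILE_MB * 1024 * 1024:
        raise ValueError("file exceeds size limit")
    sig = MAGIC_BYTES.get(ext)
    if sig and not content.startswith(sig):
        raise ValueError("file content does not match its declared format")

validate_upload("handbook.pdf", b"%PDF-1.7 ...")   # passes
try:
    validate_upload("payload.pdf", b"MZ\x90\x00")  # executable disguised as PDF
except ValueError as err:
    print(err)  # file content does not match its declared format
```

Rejecting files before embedding is the cheap point in the pipeline to stop data poisoning: a malicious document that never reaches the vector database cannot contaminate retrieval.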
Mentomy maintains strict alignment with internationally recognized security frameworks and actively works toward formal certifications: ISO/IEC 27001:2022 (Information Security Management), SOC 2 Type II (security, availability, confidentiality, processing integrity controls), compliance with ENS (Spanish National Security Scheme RD 311/2022) for public administrations, and alignment with NIST Cybersecurity Framework.
100% European Union: Your Data NEVER Leaves Europe
For European companies, geographic control of data is not optional; it is fundamental for compliance, digital sovereignty and protection against extraterritorial legislation. Mentomy guarantees 100% data residency in the European Union: no exceptions, no ambiguities, no fine print. Your data stays on European soil, always and forever.
By maintaining data exclusively in the EU, Mentomy completely eliminates the risks and complexities of international transfers, avoiding requirements for safeguard mechanisms such as SCCs, BCRs or adequacy decisions that may be judicially invalidated (Schrems II case).
Your entire knowledge base and associated data are hosted exclusively on physical servers located within the European Union. Currently: Google Cloud Storage and Google Cloud SQL database located in Madrid, Spain (europe-southwest1), vector database hosted on Google Cloud Platform in Dublin, Ireland (europe-west1), and AI engine hosted on Amazon Web Services in Frankfurt, Germany (eu-central-1).
Your data NEVER crosses EU borders under any circumstances. No transfers to the USA (CLOUD Act does not apply), China (national intelligence law), or any third jurisdiction. Not even for backups or disaster recovery (DR sites also in the EU).
All computational calculations, embedding indexing, LLM queries, AI operations, and data processing are executed on infrastructure physically located in European territory. Not even temporary processing outside the EU.
By maintaining data exclusively in the EU, we completely eliminate the complexities of Standard Contractual Clauses (SCCs Article 46.2.c), Binding Corporate Rules (BCRs Article 47), Transfer Impact Assessments (TIAs), or international transfer mechanisms that have been judicially invalidated or challenged.
Architecture aligned with the requirements of the National Security Scheme (Royal Decree 311/2022, formerly RD 3/2010) applicable to public administrations and Spanish public sector providers that require national infrastructure and sovereignty over classified information.
Design taking into account the requirements of the AI Regulation (EU) 2024/1689 from the architecture phase, including transparency on processing location (Art. 13), auditable logs stored in the EU (Art. 12), and cooperation with European supervisory authorities (Art. 70-73).
Consult with us for regulations specific to critical sectors: Healthcare (MDR 2017/745, IVDR 2017/746, e-health directives), Finance (MiFID II, DORA, PSD2, EBA guidelines), Insurance (Solvency II), Telecommunications (ePrivacy), Defense (national classification).
Mentomy offers a European digital sovereignty option against US providers (Google, Amazon, Microsoft, OpenAI) subject to the CLOUD Act, FISA 702, Executive Order 12333, or Chinese providers under the National Intelligence Law that may be compelled to provide data to foreign governments.
Careful selection of all service providers (LLM, cloud, databases, CDN) with rigorous evaluation of: (1) contractual commitments to not use client data for training, (2) processing and storage location, (3) applicable legislation and jurisdiction, (4) potential governmental access.
If you decide to use your own server for data storage, we can fully integrate with it, for appropriate fees and without artificial friction.
As a European company headquartered in Spain, we contribute to the development of the European AI technology ecosystem, reducing dependence on foreign technology monopolies, creating skilled jobs in Europe, and complying with digital sovereignty principles of the European Digital Strategy.
Total Openness: No Black Boxes, No Surprises, No Fine Print
We believe in radical transparency about how our technology works, how we process data, and what limitations exist. In a world where many AI providers operate as impenetrable black boxes, we choose absolute clarity. Technical, commercial and ethical transparency as a fundamental competitive differentiator.
We implement comprehensive transparency in accordance with the European AI Act, providing clear and understandable information about the functioning, capabilities, limitations and appropriate use of the AI system to users and affected persons.
Detailed technical explanations (data flow diagrams, architecture diagrams) that clearly document how data flows: from initial upload → preprocessing → chunking → embedding generation → vector indexing → storage → retrieval → generation → response delivery.
Total transparency on the exact physical location of data. Currently: Google Cloud Storage and Google Cloud SQL database located in Madrid, Spain (europe-southwest1), vector database hosted on Google Cloud Platform in Dublin, Ireland (europe-west1), and AI engine hosted on Amazon Web Services in Frankfurt, Germany (eu-central-1).
We respond to client questions with total and brutal honesty, even when the answer is not what they expect or what commercially benefits us. We openly admit limitations, errors and areas for improvement. Long-term trust is more valuable than a short-term sale.
Clients can request additional information about any aspect of their data processing, architecture decisions, security configurations, or internal system functioning at any time, receiving a technically detailed documented response (except core trade secrets that protect intellectual property).
When we make mistakes (technical such as critical bugs, operational such as unplanned downtime, communication such as incorrect documentation), we acknowledge them publicly without excuses, explain the root cause with a detailed post-mortem, and communicate corrective measures implemented in a transparent and timely manner.
Contracts written in language understandable to non-lawyers, without unnecessary confusing legal jargon. Policies explained in real business terms with concrete examples. Plain language legal documents as a principle, complemented by formal legal versions when necessary for enforceability.
Completely clear cost structure: storage costs, token costs, and API call costs if applicable, with no hidden charges, no billing surprises, no unilateral increases without prior notice (90 days). Cost calculators available pre-contract for precise estimation.
Mandatory advance notification (minimum 30 days for material changes, 90 days for significant changes in pricing/terms) of changes to the platform, policies, terms of service or pricing structure, with opt-out available where legally applicable and clear explanation of impacts.
Simple, straightforward cancellation procedure without dark patterns: self-service in dashboard, effective at the end of the billing period, without abusive contractual lock-in periods (maximum annual commitment negotiable), with complete data export guaranteed 30 days before final account closure.
Understandable AI: Every Response Has Verifiable Sources and Auditable Reasoning
AI decisions should not be mysterious and impenetrable black boxes. We provide comprehensive technical transparency about how responses are generated, with complete traceability to specific source documents. At Mentomy, you can always understand, verify, audit and question exactly why the AI responded in a certain way.
We provide meaningful explanations about the logic applied in AI responses, in accordance with the spirit of the GDPR on automated decisions and specific requirements of the AI Act on explainability for high-risk systems and technical documentation.
With each response, Mentomy provides the original data sources (files) used to generate the response. Users can click and open those data sources directly, verifying interpretation accuracy and understanding nuances that the AI may have simplified in its synthetic response.
In the visualization of data sources (currently PDF documents), we automatically highlight with a distinctive color the specific sections and phrases that the AI used directly to construct its response, allowing for quick visual verification of citation accuracy and fidelity to the original source without excessive paraphrasing.
We display confidence/similarity metrics (cosine similarity scores from 0 to 1, where 1 is the highest possible relevance/match) that mathematically indicate how relevant the retrieved data source is to the query made, allowing objective confidence evaluation.
For each response, the AI returns an explanation/justification of why that response was given and how it was reached. For example: "This response is based on section 1.4 titled 'Vacation Policy' which states that the number of vacation days is 22."
Administrators can review the complete query history of users in their organization, filtered for a date range. This report includes: the exact date and time of the interaction, the responsible user, the questions asked, and the AI-generated responses. This information is downloadable to Microsoft Excel (xlsx).
Administrators can review the complete history of user interactions in their organization on Mentomy. Available reports include: demographics (access within a time range), interaction (session duration), and feedback ('Like' or 'Dislike' button). This information is downloadable to Microsoft Excel (xlsx).
You Maintain Total Control: AI is Your Tool, Not Your Master
AI must always be a tool that you completely control, never an autonomous system operating without human supervision. At Mentomy, the human is always the final decision-maker. We implement granular controls that guarantee the AI operates exactly according to your specifications, permissions and organizational policies.
We implement robust human oversight mechanisms in accordance with the requirements of the European AI Act for high-risk systems, ensuring that natural persons can effectively understand, intervene and supervise the AI system.
Define precise roles (Admin / User) with specific and granular permissions for accessing folders, documents, features and APIs. Your AI only responds with information that the user is explicitly authorized to see according to the configured permissions matrix.
Organize your knowledge base in hierarchical structures for different teams, active/archived projects, confidentiality levels (public, internal, confidential, secret), or specific business needs with permission inheritance and override.
Permissions can be modified, suspended or revoked instantly with immediate effect (<1 second) on AI capabilities without latency or propagation windows. Critical for cases of employment termination, role change, or security incidents.
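The "instant effect" property described above typically comes from checking a live permission store on every query rather than caching grants. A toy sketch of the pattern; the store, emails and folder names are invented for illustration.

```python
# Permission store consulted live on every AI query. Because nothing is
# cached, revoking a grant takes effect on the very next request.
permissions = {
    "ana@acme.example": {"role": "User",  "folders": {"hr-public", "it-guides"}},
    "bo@acme.example":  {"role": "Admin", "folders": {"*"}},  # "*" = all folders
}

def can_read(user: str, folder: str) -> bool:
    """Live check: the AI may only retrieve from folders this user can see."""
    grant = permissions.get(user)
    if grant is None:
        return False
    return "*" in grant["folders"] or folder in grant["folders"]

assert can_read("ana@acme.example", "hr-public")
assert not can_read("ana@acme.example", "finance-confidential")

# Revocation (e.g., employment termination) is a single store update;
# the next query already sees it — the "<1 second" property above.
del permissions["ana@acme.example"]
assert not can_read("ana@acme.example", "hr-public")
```

The design choice worth noting: enforcing the check at retrieval time means the AI physically never sees unauthorized documents, rather than filtering its answer afterwards.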
You define customized retention policies: how long data is kept (days, months, years), when it is automatically archived to cold storage, and when it is permanently deleted.
When deleting files or folders, they are permanently and instantly erased from all our servers.
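A retention policy of the kind just described can be modeled as a simple state function over document age. The day counts below are example values chosen for the sketch, not platform defaults.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class RetentionPolicy:
    archive_after_days: int   # move to cold storage after this age
    delete_after_days: int    # permanent erasure from all servers after this age

def document_state(uploaded: date, today: date, policy: RetentionPolicy) -> str:
    """Return the lifecycle state a document should be in on `today`."""
    age = (today - uploaded).days
    if age >= policy.delete_after_days:
        return "deleted"
    if age >= policy.archive_after_days:
        return "archived"
    return "active"

# Example: archive after 1 year, permanently delete after 3 years.
policy = RetentionPolicy(archive_after_days=365, delete_after_days=365 * 3)
print(document_state(date(2024, 1, 1), date(2024, 6, 1), policy))  # active
print(document_state(date(2021, 1, 1), date(2024, 6, 1), policy))  # deleted
```

A scheduled job evaluating this function over the document inventory is enough to apply the policy automatically, while manual deletion (as described above) bypasses the schedule and erases immediately.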
You can temporarily pause or completely deactivate your AI instance whenever you wish without contractual penalties, with the ability to later reactivate while maintaining all settings, data and permissions exactly as they were. Useful for audits or organizational changes.
At Mentomy, you are the absolute and indisputable owner of your AI system. We do not make unilateral decisions about your configuration, your data or your policies without your prior explicit consent. The AI facilitates information, provides suggestions and data-based recommendations, but never makes critical business decisions nor acts autonomously without your direct supervision and informed consent.
This principle of human control supremacy is unbreakable in our technical, contractual and ethical architecture, in accordance with Article 14 of the AI Act which requires effective human oversight for high-risk systems.
Comprehensive Ethical Framework: AI Aligned with Fundamental Human Values
Beyond minimum legal compliance, we are guided by deep ethical principles that guarantee our AI benefits society and respects fundamental human dignity. Ethics is not a cosmetic surface layer added at the end; it is the philosophical foundation that guides our technical design, our product decisions and how we relate to clients, employees and society.
Our ethical framework aligns with international principles established by multilateral organizations and recognized ethical authorities, committing to development and deployment of AI that respects fundamental human rights, dignity, autonomy, and social well-being.
Mentomy's AI is a tool for assisting and augmenting human capabilities, NOT a replacement for human decision-making in significant matters. Users always maintain final control and decision-making authority over important decisions. We do not design AI to manipulate, coercively persuade, or exploit human cognitive or emotional vulnerabilities.
We proactively commit to preventing our technology from causing harm to individuals, vulnerable groups or society in general. Mandatory systematic evaluation of ethical risks before launching significant new features. Explicit special consideration of potential impacts on vulnerable populations (children, elderly, people with cognitive disabilities).
We genuinely aspire to generate positive net value for European society, not just short-term private commercial benefits. Democratization of access to knowledge within organizations. Contribution to European digital sovereignty by reducing dependence on foreign monopolies. Optimization of computational resource use to minimize carbon footprint and environmental impact.
We assume explicit moral responsibility and, when legally applicable, legal responsibility for the behavior of our AI systems and their foreseeable consequences. Clear channels for reporting ethical problems. Formal protocol for rapid investigation and correction of identified ethical issues. Contractual clarity on the division of legal responsibilities between Mentomy and end-user clients.
Mentomy does not facilitate, technically enable, or participate in any way in indiscriminate mass surveillance systems of populations, intrusive social scoring in the style of social credit systems, or authoritarian control of citizens that violates fundamental rights to privacy and freedom. We actively reject use cases that we consider ethically problematic even if they are technically legal.
We do not design features specifically oriented toward excessive invasive monitoring of employees that violates labor dignity or workplace privacy. Use of AI systems for HR management must be transparent to workers, consented to when legally required, and respect fundamental labor rights including protection against algorithmic discrimination in hiring/firing/promotion.
We implement technical controls and educate clients to respect copyright of content in knowledge bases. We proactively guide clients on appropriate legal use of copyright-protected material, fair use limitations, and infringement risks. We do not knowingly facilitate piracy or systematic violation of IP rights.
Active and continuous technical work to prevent our systems from perpetuating, amplifying or algorithmically encoding existing social discriminations present in historical input documents or cultural contexts. This includes detection of discriminatory language in responses, biases in ranking, and recommendations that may result in unjustified unfavorable treatment.
Absolute contractual prohibition on using the platform to: generate discriminatory content or hate speech, create deliberate disinformation at scale (disinformation campaigns), produce non-consensual deepfakes with malicious intent, facilitate harmful psychological manipulation, or any use that incites violence, harassment or harm against individuals or protected groups.
We honestly communicate which complex ethical problems we have not yet fully resolved satisfactorily, what inevitable ethical trade-offs we face (e.g., transparency vs. confidentiality, individual vs. group fairness), and what areas require more research and innovation. We do not claim to have perfect answers to all ethical dilemmas of advanced AI.
Accuracy, Truthfulness and Robustness: AI That Works Even Under Pressure
We obsessively strive to ensure that AI responses are accurate, verifiable, grounded in real data, and that the system operates reliably even under adverse conditions. Truthfulness means rigorous grounding in sources. Robustness means stable and predictable operation under load, partial failures, or manipulation attempts.
We implement technical measures to guarantee appropriate accuracy, robustness and cybersecurity of the AI system in accordance with the AI Act, including exhaustive testing, continuous validation, and the ability to operate reliably and safely throughout its lifecycle.
All responses are constructed EXCLUSIVELY based on documents from the client's knowledge base using RAG (Retrieval-Augmented Generation) architecture, dramatically reducing the typical hallucinations of pure LLMs that generate plausible but fabricated information without factual grounding. Mandatory grounding in retrieved context.
Every factual claim in responses includes a specific explicit reference to a source document with a direct hyperlink that the user can click and review immediately to verify accuracy, full context, and detect erroneous interpretations or excessive simplifications by the AI. Zero claims without documentary support.
Confidence metrics rigorously based on quantitative similarity scores between query and retrieved documents (cosine similarity in embedding space), objectively indicating how confident the AI is about the accuracy of its response: High confidence (>0.85), Medium (0.7-0.85), Low (<0.7 - user warning to verify carefully).
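The thresholds quoted above map directly onto a small classifier. A minimal sketch, using exactly the bands and cut-offs from the text (the function name and label strings are ours):

```python
def confidence_band(score: float) -> str:
    """Map a cosine-similarity score (0-1) to the confidence bands above:
    High > 0.85, Medium 0.7-0.85, Low < 0.7 (user warned to verify)."""
    if score > 0.85:
        return "High"
    if score >= 0.7:
        return "Medium"
    return "Low (verify carefully)"

print(confidence_band(0.91))  # High
print(confidence_band(0.78))  # Medium
print(confidence_band(0.55))  # Low (verify carefully)
```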
The AI is specifically instructed through system prompts and behavior fine-tuning to honestly admit when it does NOT have enough information in the knowledge base, instead of inventing or speculating with fabricated responses. Phrases such as "I did not find specific information about...", "The available documents do not cover...", "I would need more context to..."
Technical design specifically optimized to minimize hallucinations: robust retrieval with reranking (BGE cross-encoder), strict context grounding in prompts (explicit instructions: "Answer ONLY based on provided context"), system prompts that severely penalize ungrounded generations, and post-processing validation that detects inconsistencies between response and context.
At Mentomy we carry out periodic testing, using manually curated benchmarks with known ground truth to evaluate accuracy, relevance, coherence, comprehensiveness, and absence of hallucinations in responses over time and across system updates.
Systematic periodic review by domain-specific human experts of responses generated in critical use cases (compliance, legal, finance, medical, security) with structured feedback through quantitative rubrics, identifying problematic edge cases and feeding improvements to prompts and retrieval.
UI-integrated mechanisms (thumbs up/down, report, detailed comments) for end users to report incorrect, incomplete, biased, or problematic responses, with a human review process that investigates root cause, corrects the specific issue, and iteratively feeds continuous improvements to the system.
Automated continuous tracking of critical KPIs: average retrieval relevance (target >0.80), rate of responses without sufficient sources (target <5%), end-to-end latency (<10s), user satisfaction (CSAT survey), with automatic alerts upon significant degradation.
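KPI targets like these can be checked mechanically. A sketch of such an alerting check, with thresholds taken from the targets above; the metric names and data-structure shape are invented for the example.

```python
# Illustrative targets, mirroring those listed above:
# relevance >= 0.80, no-source rate <= 5%, latency <= 10 s.
KPI_TARGETS = {
    "avg_retrieval_relevance": (">=", 0.80),
    "no_source_response_rate": ("<=", 0.05),
    "p95_latency_seconds":     ("<=", 10.0),
}

def degraded_kpis(metrics: dict) -> list:
    """Return the names of KPIs breaching their target (would trigger alerts)."""
    alerts = []
    for name, (op, target) in KPI_TARGETS.items():
        value = metrics[name]
        ok = value >= target if op == ">=" else value <= target
        if not ok:
            alerts.append(name)
    return alerts

metrics = {"avg_retrieval_relevance": 0.74,   # below target -> alert
           "no_source_response_rate": 0.03,   # within target
           "p95_latency_seconds": 12.5}       # above target -> alert
print(degraded_kpis(metrics))  # ['avg_retrieval_relevance', 'p95_latency_seconds']
```

Running a check like this on each metrics window is what turns the stated targets into the "automatic alerts upon significant degradation" described above.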
Fair and Equitable AI: No Bias, No Discrimination, No Favoritism
Fairness is essential to avoid biased decisions or systematic unjust treatment. We actively commit to ensuring our AI treats all users and use cases equitably, without discrimination based on protected characteristics, without unjustified algorithmic biases, and without artificially favoring certain outcomes over others.
We implement technical and organizational measures to prevent unlawful discrimination and promote fairness in accordance with AI Act requirements on non-discrimination, testing with representative data, and bias mitigation that could result in unjustified unfavorable treatment of individuals or groups.
We work exclusively with LLM providers (Mistral AI) that demonstrate documented and verifiable public commitments to comprehensive bias mitigation in their mass training processes (toxic data filtering, RLHF with diverse raters, extensive red teaming), continuous evaluation (fairness benchmarks such as BBQ, BOLD), and transparency about known limitations.
By exclusively using the knowledge base provided and curated by the client (corporate documents, internal policies, professional knowledge bases) through RAG architecture, we significantly reduce the risks of unwanted external biases introduced by massive internet training datasets that inevitably contain historical social biases.
We conduct scheduled periodic tests (monthly) with deliberately diverse sets of use cases: different industries (tech, healthcare, finance, legal), query types (factual, analytical, creative), and when relevant to the domain, scenarios covering varied demographics, identifying and correcting biased behaviors through prompt adjustments, retrieval, or models.
Systematic expert human + automated (NLP bias detection tools) review of generated responses to detect potential bias patterns in: language used (gendered language, stereotypes), unjustified recommendations or prioritizations, over-representation of certain perspectives vs others, with correction through prompt engineering or fine-tuning when systematic biases are detected.
We openly and honestly document known biases inherent in the LLM foundational models used (e.g., linguistic biases toward English, over-representation of Western perspectives, asymmetric performance across different demographics) and their limitations in certain cultural, linguistic, or domain-specific contexts, without claiming to have completely "solved" the bias problem in AI.
Our RBAC (Role-Based Access Control) system is founded exclusively on legitimate and functional organizational criteria (job role, department, project, confidentiality level), NEVER on personal characteristics protected by law (race, gender, religion, sexual orientation, national origin, age, disability). Architecture-level guarantees against discrimination.
We support multiple major European languages (Spanish, English, French, German, Italian, Portuguese) with equivalent response quality tested through multilingual benchmarks, avoiding linguistic barriers in access to critical business information and promoting inclusion of non-native workers in the dominant corporate language.
The AI does not artificially favor certain documents, privileged sources or responses over others except by objective technical relevance measured algorithmically through similarity scores in vector space. There is no manual "boosting" of certain content, authors or departments. Pure ranking based on mathematical semantic relevance.