Trust Center

Last updated: March 13, 2026

Librainian by Totem is an AI-powered metadata management platform for enterprise digital asset libraries. This page describes how we build, secure, and govern our AI systems.

1. Our Approach to AI Governance

Librainian uses artificial intelligence to generate metadata: titles, descriptions, keywords, alt text, and custom fields for images, videos, and documents stored in your digital asset management system. Our approach is built on three principles:

  • Human Oversight: AI-generated metadata is reviewable before application. Auto-push is available for trusted workflows but is always opt-in and configurable per library.
  • Domain Specificity: We use specialized prompt strategies tailored to different asset types, rather than a one-size-fits-all approach.
  • Defense in Depth: Safety controls operate at multiple layers — AI instructions, post-processing validation, and configurable thresholds.

2. How Our AI Works

When an asset is submitted for enhancement, Librainian selects a specialized prompt strategy based on the asset type and client configuration. The asset file is sent to an AI model for analysis, and the AI returns structured metadata in JSON format. This metadata then passes through a multi-stage validation pipeline before it is presented to the user.
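The flow above can be sketched as follows. This is an illustrative outline only — the names (`selectStrategy`, the validation stages) and the example rules are assumptions, not Librainian's actual API:

```typescript
// Hypothetical sketch of the enhancement flow: strategy selection,
// then multi-stage validation of the AI's structured JSON output.
type Strategy = "default" | "stock" | "headshot" | "logo";

interface Metadata {
  title: string;
  keywords: string[];
  confidence: number; // score returned alongside the AI's JSON output
}

// Step 1: pick a prompt strategy from asset type and client configuration.
function selectStrategy(assetType: string, clientOverride?: Strategy): Strategy {
  if (clientOverride) return clientOverride;
  if (assetType === "portrait") return "headshot";
  if (assetType === "brand") return "logo";
  return "default";
}

// Step 2: run the AI's output through validation stages in order.
const stages: Array<(m: Metadata) => Metadata> = [
  (m) => ({ ...m, keywords: Array.from(new Set(m.keywords)) }), // deduplicate tags
  (m) => ({ ...m, title: m.title.slice(0, 120) }),              // enforce length limits
];

function validate(m: Metadata): Metadata {
  return stages.reduce((acc, stage) => stage(acc), m);
}
```

The example stages (tag deduplication, length enforcement) are taken from the validation behaviors described elsewhere on this page; the real pipeline has additional stages.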

Specialized Prompt Strategies

Each strategy includes tailored AI instructions, domain-specific validation rules, and output constraints. The Default and Stock strategies share the same configuration:

Default, Stock, Product, Architecture, Document, Document (Native PDF), Event, Fashion, Food, Nature, Art, Medical, Sports, Automotive, Marketing, Headshot (GDPR-focused), Logo, Icon.

No training on customer data. Your assets are processed ephemerally by third-party AI models for metadata generation only. Your files and metadata are never used to train or fine-tune any AI model.

3. Safeguards Against Harmful Output

We apply context-appropriate safeguards to prevent harmful, biased, or inappropriate metadata. The level of filtering is determined by the use case — strategies designed for sensitive contexts (employee portraits) enforce strict characteristic filtering, while strategies designed for commercial photography (stock, fashion) allow the descriptive metadata that makes assets findable.

Context-Appropriate Filtering

Where a use case requires it, safety constraints are enforced at two independent stages:

  • Layer 1 — AI Instructions: System-level prompts instruct the AI model on what it must never describe, infer, or reference. These constraints receive elevated priority in the model's processing.
  • Layer 2 — Post-Processing Validation: After the AI responds, programmatic validation scans the output for restricted terms and removes or replaces them before metadata is presented to the user. This acts as a safety net even if the AI instruction layer is imperfect.
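A Layer-2 scrub of this kind might look like the following minimal sketch, assuming a simple exact-match term list — the production rule set and its matching logic are not public:

```typescript
// Illustrative Layer-2 filter. The term list below is an example subset,
// not Librainian's production rule set.
const RESTRICTED_TERMS = new Set(["age", "ethnicity", "religion"]);

// Remove any keyword that exactly matches a restricted term (case-insensitive).
function scrubKeywords(keywords: string[]): string[] {
  return keywords.filter((k) => !RESTRICTED_TERMS.has(k.toLowerCase()));
}
```

In practice a scrub like this would also need phrase and word-boundary handling; exact match keeps the example minimal.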

For commercial photography strategies (Stock, Fashion, Event, etc.), descriptive people metadata is permitted because searchability by visual attributes is essential to DAM workflows and model releases typically exist. Organizations requiring stricter filtering for specific libraries can configure additional metadata restrictions.

GDPR-focused Headshot Processing

Our Headshot strategy is specifically designed for employee and professional portrait photography — contexts where model releases for AI categorization typically do not exist and GDPR Article 9 "special category" protections apply. It enforces absolute constraints against describing or inferring protected characteristics:

Restricted categories:

Age, apparent age, gender, gender expression, race, ethnicity, skin tone, hair color, hair texture, body type, weight, height, disability, religion, religious attire, sexual orientation.

Permitted focus areas:

Attire type, background setting, composition, lighting quality, image resolution, and professional presentation.

Objectivity Enforcement

For brand and logo assets, our Logo strategy removes subjective marketing language (such as "elegant," "stunning," or "premium") and enforces factual descriptions of observable visual elements: colors, shapes, text content, and layout.
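A subjective-language pass like the one described could be sketched as follows; the term list is an example taken from the sentence above, not the shipped set, and the word-level filtering is an assumption about the mechanism:

```typescript
// Illustrative objectivity filter for logo descriptions.
const SUBJECTIVE_TERMS = new Set(["elegant", "stunning", "premium"]);

// Drop subjective marketing adjectives, keeping factual visual description.
function stripSubjective(description: string): string {
  return description
    .split(/\s+/)
    .filter((w) => !SUBJECTIVE_TERMS.has(w.toLowerCase().replace(/[.,]/g, "")))
    .join(" ");
}
```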

Confidence Thresholding

AI-generated metadata includes confidence scores. Results below a configurable threshold (default: 70%) are automatically excluded. Thresholds can be set globally per library or individually per metadata field. When re-processing an asset, new values only overwrite existing values if the new confidence score is strictly higher.
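The threshold and overwrite rules above can be expressed compactly. This is a sketch under the stated defaults; the `FieldResult` shape is hypothetical:

```typescript
// Hypothetical per-field result as described in the section above.
interface FieldResult {
  value: string;
  confidence: number; // 0-100, per the 70% default threshold
}

const DEFAULT_THRESHOLD = 70;

// Exclude results below the configurable threshold.
function passesThreshold(r: FieldResult, threshold = DEFAULT_THRESHOLD): boolean {
  return r.confidence >= threshold;
}

// On re-processing, overwrite only if the new score is strictly higher.
function shouldOverwrite(existing: FieldResult | undefined, incoming: FieldResult): boolean {
  if (!passesThreshold(incoming)) return false;
  if (!existing) return true;
  return incoming.confidence > existing.confidence;
}
```

Note the strict inequality: a re-processed value with an equal confidence score leaves the existing value untouched.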

Per-Client Configuration

Metadata restrictions, prompt strategies, confidence thresholds, and field update behaviors are all configurable per client and per library. This allows us to tailor the system to each organization's specific policies and compliance requirements.
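The configurable surface described above implies a per-library configuration shape roughly like the following; the field names are illustrative, not Librainian's schema:

```typescript
// Hypothetical per-library configuration implied by this section.
interface LibraryConfig {
  strategy: string;            // e.g. "headshot", "logo"
  confidenceThreshold: number; // default 70
  restrictedTerms: string[];   // additional metadata restrictions
  autoPush: boolean;           // opt-in auto-application of metadata
}

const exampleDefaults: LibraryConfig = {
  strategy: "default",
  confidenceThreshold: 70,
  restrictedTerms: [],
  autoPush: false,
};
```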

4. Data Handling & Security

What We Process

  • Asset files (images, videos, PDFs) are sent to AI providers for analysis and metadata generation.
  • Existing metadata (filenames, folder paths, prior tags) may be included as context to improve accuracy.
  • AI-generated metadata (title, description, alt text, tags, custom fields) is returned and stored.
  • Asset files are not retained by AI providers after processing.

Technical Security Measures

  • Encryption at Rest: AES-256 via cloud provider (Supabase)
  • Encryption in Transit: TLS for all connections
  • Authentication: Supabase JWT + Google OAuth
  • Data Isolation: Row Level Security on all tables
  • API Protection: Rate limiting on all endpoints
  • Payment Security: Stripe PCI-compliant processing

AI Provider Data Practices

We use third-party AI models (Google Gemini and OpenAI) for metadata generation. Under these providers' API terms of service, data submitted via their APIs is not used to train or improve their models. Your assets are processed in real-time and are not stored by the AI providers beyond the duration of the API request.

5. Privacy & Data Protection

We follow Privacy by Design principles — data protection is built into our architecture, not added as an afterthought. We minimize the data we collect, isolate it by tenant, and give users control over their information.

GDPR (EU General Data Protection Regulation)

We process personal data only for the purpose of providing the service. Our architecture implements data minimization, purpose limitation, and storage limitation principles. A Data Processing Agreement (DPA) is available for enterprise clients. Where assets contain images of people, our GDPR-focused Headshot strategy prevents inference of protected characteristics.

Data subject rights supported: Access, rectification, erasure, portability, and restriction of processing. Users can manage their data directly in the platform or contact us for assistance.

CCPA / CPRA (California Consumer Privacy Act)

We do not sell personal information. California residents have the right to know what data we collect, request deletion, and opt out of any sale of personal information. Our privacy practices align with CCPA/CPRA requirements for service providers.

UK Data Protection Act 2018 (UK GDPR)

Our GDPR-focused practices extend to UK data protection requirements. We apply the same data minimization, security, and data subject rights protections regardless of jurisdiction.

6. AI Governance & Compliance Frameworks

EU AI Act

Our AI metadata generation system is classified as minimal risk under the EU AI Act. It does not fall into any Annex III high-risk category — it processes digital content (images, documents, videos), not biometric data, and does not make decisions about individuals. Where assets contain images of people (e.g., headshots), we apply GDPR-focused strategies that explicitly prevent inference of protected characteristics.

We voluntarily implement transparency measures that exceed minimal-risk requirements: users are informed that metadata is AI-generated, all output is reviewable before application, and confidence scores are visible.

NIST AI Risk Management Framework

Our development practices align with the four core functions of the NIST AI RMF:

  • Govern: Configurable prompt strategies and AI models per client, admin-only configuration, tiered access control.
  • Map: Domain-specific strategies identify and address risks per asset type (e.g., headshot processing → GDPR constraints).
  • Measure: Confidence thresholds, multi-stage validation, tag deduplication, length enforcement, custom field option validation.
  • Manage: Restricted terms filtering, human review before application, field update strategies, audit logging of AI operations.

ISO/IEC 42001 (AI Management Systems)

We are not currently ISO 42001 certified. Our practices — including AI model governance, confidence-based quality controls, restricted-term filtering, per-client configuration, and audit logging — align with the standard's intent. We are evaluating formal certification as we scale our enterprise client base.

7. Accessibility

Accessibility is a core function of our platform, not an afterthought. Librainian automatically generates WCAG-focused alt text for images, helping organizations meet accessibility standards across their digital asset libraries at scale.

  • AI-generated alt text follows WCAG 2.1 Level AA guidelines — concise, descriptive, and limited to 125 characters for screen reader compatibility.
  • Alt text generation is available across all prompt strategies, ensuring accessibility metadata is produced for every asset type.
  • Organizations can use Librainian to retroactively add alt text to existing asset libraries that lack accessibility metadata.
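The 125-character constraint above can be enforced mechanically. This is a minimal sketch; truncating at a word boundary is an assumption about how such a limit might be applied, not a description of the shipped behavior:

```typescript
// Illustrative enforcement of the 125-character alt-text limit.
const ALT_TEXT_MAX = 125;

function fitAltText(text: string): string {
  if (text.length <= ALT_TEXT_MAX) return text;
  const cut = text.slice(0, ALT_TEXT_MAX);
  const lastSpace = cut.lastIndexOf(" ");
  // Prefer cutting at a word boundary so screen readers get whole words.
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut).trimEnd();
}
```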

8. Subprocessors

The following third-party services process data on our behalf:

Provider | Purpose | Data Shared
Google (Gemini API) | AI metadata generation | Asset files, text prompts
OpenAI | AI metadata generation (fallback) | Asset files, text prompts
Supabase | Database, authentication, file storage | All platform data
Stripe | Payment processing | Customer and subscription identifiers
Google Cloud Platform | Worker compute (Cloud Functions) | Job payloads
Frontify | DAM integration (client-directed) | Asset metadata

Questions About Our Security Practices?

We welcome inquiries from prospective and current clients about our security posture, AI governance, and compliance practices. Detailed documentation is available on request.

Librainian by Totem Agency

Librainian's consumer self-service platform is operated by Starbright Lab LLC (US), our technology operations partner. Enterprise agreements are entered through Totem Agency.

© 2026 Totem Agency. All rights reserved.