Flinque AI Usage Policy
The short version, so you know how we use AI
Flinque uses AI and machine learning throughout our platform to help customers discover creators, evaluate partnerships, and personalize outreach. Our AI features are decision-support tools, not decision-makers. A human always reviews, interprets, and decides before taking meaningful action. We train on data we have rights to, disclose what we can, and avoid making fully automated decisions that legally affect people.
This page explains which AI features we operate, what data they use, how accurate they are, how humans oversee them, what rights creators and customers have regarding AI, and how we comply with laws like the EU AI Act and GDPR Article 22.
To raise a question or challenge about our AI, use the contact page with category “AI Concern”.
- Our AI Principles
- AI Features in Flinque
- Human Oversight and Decision-Making
- Accuracy and Limitations
- Training Data and Sources
- Third-Party AI Providers
- Bias, Fairness, and Testing
- GDPR Article 22 and Automated Decisions
- EU AI Act and Regulatory Compliance
- Creator Rights Regarding AI Scoring
- Customer Responsibilities
- AI-Generated Content Disclosure
- Data Privacy in AI Processing
- Changes and Governance
- Contact for AI Matters
1. Our AI Principles
AI is central to how the Flinque influencer marketing platform helps customers discover and evaluate creators. Our approach is guided by six principles:
- Human oversight: AI informs decisions; humans make them
- Transparency: we disclose which features use AI and what they do
- Accuracy: we invest in training, testing, and improving our models
- Fairness: we monitor for bias and work to reduce unfair outcomes
- Privacy: AI processing respects the same data protection standards as the rest of our platform
- Accountability: we are responsible for how our AI is designed and deployed, and we welcome challenges to our outputs
These principles guide every AI feature we build and operate.
2. AI Features in Flinque
Flinque operates several AI-powered features. Each is designed as a decision-support tool for human users.
The set of features is not fixed. We continue to develop and refine our AI capabilities, and this policy covers new and updated features as they are introduced.
Features clearly labeled “AI”, “Score”, “IQ”, or “Smart” in the platform interface indicate AI involvement.
3. Human Oversight and Decision-Making
Every AI feature in Flinque is designed to assist human decision-making, not replace it. This matters for legal, ethical, and quality reasons.
3.1 No fully automated decisions
Flinque does not make decisions solely through automated means that produce legal or similarly significant effects on individuals. Decisions that affect creators (such as listing, scoring, flagging, or delisting) involve documented policies, human review where outcomes are material, and appeal routes.
3.2 Customer decisions
Customers are always responsible for decisions they make using AI outputs, including:
- Selecting creators for campaigns
- Approving outreach messages
- Reviewing AI-generated drafts
- Acting on scores, flags, or recommendations
3.3 Internal oversight
We maintain internal oversight including:
- Pre-launch review of new AI features
- Post-launch performance monitoring
- Feedback loops from customers and affected creators
- Governance review of significant model changes
4. Accuracy and Limitations
AI is inherently probabilistic. Our features produce estimates, not facts. Understanding accuracy and limitations is important for using them responsibly.
4.1 AI outputs are estimates
- Scores and flags are based on patterns in data, not certainties
- Outputs can be wrong, out of date, or miss context
- Edge cases, emerging trends, and new creators may be scored less accurately
- Data sources change and evolve, affecting output stability over time
4.2 Recommended use
AI outputs should be used:
- As one input among many in your decision-making
- Alongside human judgment and context
- With healthy skepticism, especially for edge cases or unusual profiles
- With awareness of the data limitations described in our Disclaimer
4.3 Not suitable for
Our AI features are not designed or suitable for:
- Making legal determinations about individuals
- Credit scoring, employment decisions, or benefits determinations
- Medical, health, financial, or safety-critical decisions
- Drawing conclusions about creators’ character, morality, or lawfulness
4.4 Continuous improvement
We continually measure and improve accuracy. Customers and creators who identify errors are encouraged to report them through our contact page or Report an Issue page.
5. Training Data and Sources
Our AI features are trained and informed by data from the following categories:
- Public social media data: creator profiles, captions, metadata, engagement signals from public posts
- Aggregated platform data: statistical patterns across creators (anonymized where possible)
- Public web data: publicly available information that complements social data
- Third-party licensed data: data from authorized creator data aggregators and research partners
- Flinque-generated derivatives: scores, classifications, and insights we have derived
5.1 What we do not use for training
- Customer User Content (lists, notes, outreach templates) for models that serve other customers
- Private or non-public social content
- Data obtained without authorization from its source
- Personal data of minors, to the extent identifiable
5.2 Data rights
We believe our use of public social data for AI training is lawful under legitimate interests and equivalent frameworks. Creators who object to the use of their public data can exercise opt-out rights through our Data Removal and Right to Erasure Policy.
6. Third-Party AI Providers
Flinque uses third-party AI providers for specific capabilities. We select providers whose terms and safeguards are appropriate for commercial processing.
- Language models: we use large language models (including models from leading providers) for content generation, classification, and analysis tasks
- Embedding and similarity: we use vector-based AI for clustering and recommendations
- Specialized models: we use task-specific models for image classification, content analysis, and more
6.1 Data handling by AI providers
We require AI providers to:
- Not use our data to train their general-purpose models without consent
- Provide appropriate data protection commitments (DPAs where relevant)
- Maintain security standards appropriate for commercial processing
- Support international transfer requirements where applicable
Current third-party AI providers are listed among sub-processors in our Data Privacy Policy.
7. Bias, Fairness, and Testing
AI systems can reflect or amplify biases present in their training data. We work to reduce this risk and welcome feedback when we fall short.
7.1 Our bias reduction practices
- Diverse training data where feasible
- Avoidance of sensitive attributes (race, ethnicity, religion, sexual orientation) in scoring logic
- Regular evaluation of model performance across different creator categories
- Human review of outputs that may affect creator visibility or opportunities
- Feedback mechanisms to correct identified biases
7.2 Known limitations
We acknowledge that:
- Training data drawn from social platforms reflects the composition of those platforms
- Some creator categories (niche, non-English, smaller creators) may have less data for scoring
- Models can inherit biases from upstream data sources
- Language, dialect, and cultural context can affect classification accuracy
7.3 Reporting concerns
If you believe our AI is producing biased or unfair outcomes, please report it through our contact page with category “AI Concern”. We treat bias reports as priority issues.
8. GDPR Article 22 and Automated Decisions
GDPR Article 22 gives individuals in the EEA the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects.
8.1 Our position
Flinque does not use AI to make decisions that produce legal or similarly significant effects on individuals. Our AI outputs are decision-support tools that require human review before action.
8.2 What customers must do
Customers who use Flinque AI outputs to make decisions affecting individuals are responsible for ensuring their own use of those outputs complies with Article 22 and similar laws. Specifically:
- Do not treat an AI score as the sole basis for a material decision
- Provide human review for decisions that affect individuals
- Give affected individuals information about the logic and ability to contest decisions where relevant
8.3 If you believe a decision was automated
Individuals who believe a Flinque-related decision was made solely by automated means may contact us through our contact page with category “AI Concern” to request review.
9. EU AI Act and Regulatory Compliance
The EU AI Act (Regulation (EU) 2024/1689) creates risk-based rules for AI systems deployed in the European Union. Our compliance approach:
9.1 Risk classification
We have assessed our AI features and believe they do not fall into the EU AI Act’s “high-risk” categories (such as biometric identification, critical infrastructure, education, employment decisions, essential services). Our features are commercial decision-support tools for marketing professionals.
9.2 Transparency obligations
Where AI Act transparency obligations apply, we comply by:
- Clearly identifying AI features within the platform
- Disclosing when users are interacting with AI (chatbot-style interactions)
- Providing documentation about how AI features work at a general level
- Supporting the deployer obligations of customers who use our AI outputs
9.3 Prohibited practices
We do not engage in AI practices prohibited by the EU AI Act, including:
- Subliminal techniques to distort behavior
- Exploitation of vulnerabilities
- Social scoring of individuals for general purposes
- Real-time biometric identification in public spaces
- Emotion recognition in workplace or education contexts
9.4 Other regulatory frameworks
We also monitor the UK's AI regulatory approach, US state AI laws (including the Colorado AI Act and requirements in California and Illinois), Canada's proposed AIDA, and other frameworks, and we update our practices as they develop.
10. Creator Rights Regarding AI Scoring
Creators whose profiles are scored by Flinque AI features have specific rights.
10.1 Right to be informed
Creators can request general information about how our AI features work, what data is used, and what categories of output are produced.
10.2 Right to challenge scores
Creators who believe a specific score or flag is inaccurate can:
- Request a review of the score through our contact page with category “AI Concern”
- Provide additional context or evidence
- Receive an explanation of why a score was assigned (at a general level, without disclosing proprietary algorithmic details)
10.3 Right to opt out
Creators can opt out of having their public data indexed and scored by Flinque through our Data Removal and Right to Erasure Policy. Verified opt-outs are processed within 48 hours.
10.4 Right to non-discriminatory treatment
We do not use sensitive attributes (race, ethnicity, religion, sexual orientation, disability, national origin) as inputs to our scoring logic. If you believe a score reflects discriminatory outcomes, report it so we can investigate.
11. Customer Responsibilities
Customers using Flinque AI features have specific responsibilities.
- Human review: review AI outputs before acting on them, especially for decisions that affect creators
- No sole reliance: do not make decisions solely based on AI scores, particularly for material actions
- Accuracy awareness: understand that AI outputs are estimates, not guarantees
- Appropriate use: use AI features within their intended purpose; do not extract outputs to build competing products
- AI-generated content transparency: disclose to recipients when outreach content was AI-generated or AI-assisted, as required by applicable laws
- Compliance with laws in your jurisdiction: understand AI regulations in the jurisdictions where you operate and where you reach creators
- No prohibited use cases: do not use AI outputs for credit decisions, employment decisions, benefits determinations, or other prohibited purposes
Violations of customer responsibilities are violations of our Acceptable Use Policy and can result in enforcement action.
12. AI-Generated Content Disclosure
Content generated or assisted by AI carries special responsibilities.
12.1 Flinque-generated content
Content that Flinque generates using AI (scores, drafts, analyses, suggestions) is labeled as such within the platform. Users are informed when they are interacting with AI outputs.
12.2 Customer-generated content using Flinque AI
Customers using our Outreach Assistant or similar features should:
- Review AI-generated drafts before sending
- Make personalization genuine, not superficial
- Comply with applicable disclosure laws (for example the EU AI Act’s deepfake disclosure rules)
- Not use AI to impersonate other people or fabricate prior relationships
12.3 No undisclosed AI content
We do not generate AI content that pretends to be written by a specific named human. Using our AI to impersonate real individuals without disclosure violates this policy and our Acceptable Use Policy.
13. Data Privacy in AI Processing
AI processing is subject to the same data protection standards as the rest of our platform.
- Customer User Content is not used to train models that serve other customers
- AI processing of personal data has a documented lawful basis
- Sensitive personal data is avoided in AI inputs wherever possible
- Third-party AI providers are bound by our Data Processing Agreement terms
- Data subject rights (access, erasure, objection) apply to AI-processed data
- International transfers for AI processing follow the safeguards in our GDPR Compliance policy
For the full data protection framework, see our Privacy Policy and Data Privacy Policy.
14. Changes and Governance
Our AI evolves, and so does this policy.
14.1 Internal governance
- Pre-launch review of new AI features covering accuracy, fairness, privacy, and legal considerations
- Post-launch monitoring including accuracy metrics and user feedback
- Periodic review of AI policies, at least annually
- Documentation of significant model changes
14.2 Updates to this policy
- Material changes communicated to affected users with reasonable notice
- Non-material changes (clarifications, new feature additions) take effect upon posting
- Version history maintained and available on request
14.3 Regulatory monitoring
We monitor developments in AI regulation globally and update our practices and policies to remain compliant. This policy will evolve as the regulatory landscape matures.
15. Contact for AI Matters
For questions about this AI Usage Policy, concerns about specific AI outputs, requests for review, or bias reports, contact us.
Attn: AI Governance
#8, Newbury Street
700 Boylston St
Boston, Massachusetts 02116
United States
Contact form: flinque.com/contact
Report an issue: flinque.com/report-an-issue