
Security and Privacy

This page describes the security measures and privacy considerations implemented in the GamiBot platform.


Data Protection

Encryption

| Layer | Standard | Description |
| --- | --- | --- |
| In transit | TLS 1.3 | All API calls (Moodle ↔ LangFlow ↔ Qdrant ↔ Client) |
| At rest | AES-256 | Qdrant payloads (if sensitive data detected) |
| API keys | Environment variables | Never stored in code or logs |
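Loading keys from the environment at startup can be sketched as follows; the variable name `QDRANT_API_KEY` is an illustrative assumption, not a documented setting:

```python
import os

def load_api_key(name: str) -> str:
    """Read an API key from the environment; fail fast if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value

# Example: qdrant_key = load_api_key("QDRANT_API_KEY")
```

Failing fast at startup keeps a missing key from surfacing later as an opaque authentication error, and the key value itself never appears in code or logs.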

Secure Communication

```
┌────────┐   TLS 1.3   ┌──────────┐   TLS 1.3   ┌────────┐
│ Client │ ←─────────→ │  Moodle  │ ←─────────→ │ Qdrant │
└────────┘             └──────────┘             └────────┘
                            ↕ TLS 1.3
                       ┌──────────┐
                       │ LangFlow │
                       └──────────┘
```

Access Control

Course Isolation

All Qdrant queries include a `course_id` filter to ensure students cannot access materials from other courses:

```python
from qdrant_client.models import FieldCondition, Filter, MatchValue

# Every search is course-scoped
results = qdrant_client.search(
    collection_name="course_materials",
    query_vector=embedding,
    query_filter=Filter(
        must=[
            FieldCondition(
                key="course_id",
                match=MatchValue(value=student_course_id)
            )
        ]
    )
)
```

Authentication

  • Moodle session tokens validated for every API request
  • Webhook signatures verified with HMAC-SHA256
  • API keys rotated regularly
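Webhook verification as described above can be sketched with Python's standard `hmac` module; the function name and parameters are illustrative:

```python
import hashlib
import hmac

def verify_webhook_signature(payload: bytes, signature_hex: str, secret: bytes) -> bool:
    """Recompute the HMAC-SHA256 of the raw payload and compare in constant time."""
    expected = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)
```

`hmac.compare_digest` performs a constant-time comparison, which avoids leaking information about the expected signature through timing differences.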

Role-Based Access

| Role | Capabilities |
| --- | --- |
| Student | Query own courses only |
| Instructor | Manage ingestion, view analytics |
| Manager | Full system administration |
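One minimal way to enforce the role table is a deny-by-default capability map; the capability names here are illustrative assumptions, not the plugin's actual identifiers:

```python
# Hypothetical capability map mirroring the role table above.
ROLE_CAPABILITIES = {
    "student": {"query_own_courses"},
    "instructor": {"query_own_courses", "manage_ingestion", "view_analytics"},
    "manager": {"query_own_courses", "manage_ingestion",
                "view_analytics", "administer_system"},
}

def has_capability(role: str, capability: str) -> bool:
    """Deny by default: unknown roles receive no capabilities."""
    return capability in ROLE_CAPABILITIES.get(role, set())
```

Keeping the check deny-by-default means a misconfigured or unrecognized role fails closed rather than open.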

Data Retention

| Data Type | Retention Period | Purging Method |
| --- | --- | --- |
| Course materials (vectors) | Duration of course + 1 year | Manual, by instructor |
| Chat history | 6 months | Automatic deletion |
| Quiz performance | 1 academic year | Automatic deletion |
| Ingestion logs | 3 months | Automatic deletion |
| User embeddings | 6 months | Automatic deletion after course end |

Automatic Purging

```sql
-- Scheduled job for chat history cleanup
DELETE FROM {local_gamibot_chat}
WHERE created_at < NOW() - INTERVAL '6 months';

-- Quiz performance cleanup
DELETE FROM {local_gamibot_quizzes}
WHERE created_at < NOW() - INTERVAL '1 year';
```

Privacy Considerations

Transparency

Student Notification

Students are informed, through clear messaging in the chat interface and in the Moodle settings, that an AI system summarizes their course materials.

Opt-Out Options

  • Checkbox to exclude student data from LLM training (if using OpenAI API)
  • Data export available upon request
  • Data deletion available upon request

No Third-Party Sharing

  • Materials are not shared with external AI platforms without explicit consent
  • All processing can use self-hosted LLMs for maximum privacy

GDPR Compliance

| Right | Implementation |
| --- | --- |
| Right to Access | Export student data within 30 days |
| Right to Erasure | Delete student data within 30 days |
| Right to Portability | JSON export of all personal data |
| Right to Object | Opt-out of AI processing |
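The Right to Portability row could be served by a small export helper; the field names below are illustrative assumptions about what a student's record contains:

```python
import json

def export_personal_data(user_record: dict, chat_history: list, quiz_results: list) -> str:
    """Bundle all personal data into one JSON document (Right to Portability)."""
    return json.dumps(
        {
            "user": user_record,
            "chat_history": chat_history,
            "quiz_performance": quiz_results,
        },
        indent=2,
        default=str,  # serialize timestamps and other non-JSON types
    )
```

A single machine-readable document lets students take their data to another service without format conversion.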

Model Safety

Content Filtering

  • No generation of content that is harmful, discriminatory, or facilitates academic dishonesty
  • System prompts include explicit safety guidelines
  • Output is monitored for policy violations

Prompt Injection Defense

````python
import re

def sanitize_user_input(input_text: str) -> str:
    """Mitigate prompt injection with a blocklist (one layer of defense, not a complete fix)."""
    # Remove potential injection patterns, matching case-insensitively
    dangerous_patterns = [
        "ignore previous instructions",
        "system prompt",
        "you are now",
        "forget your instructions"
    ]

    sanitized = input_text
    for pattern in dangerous_patterns:
        sanitized = re.sub(re.escape(pattern), "[FILTERED]", sanitized,
                           flags=re.IGNORECASE)

    # Escape special tokens
    sanitized = sanitized.replace("```", "'''")

    return sanitized
````

Hallucination Mitigation

  • LLM output strictly constrained to retrieved materials (RAG principle)
  • Responses include source citations
  • Confidence scores logged for quality monitoring
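Constraining the LLM to retrieved material can be sketched as a prompt-construction step; the chunk keys `source` and `text` are illustrative assumptions about the retrieval payload:

```python
def build_grounded_prompt(question: str, chunks: list) -> str:
    """Build a prompt that restricts the model to retrieved sources and asks for citations."""
    # Number each chunk so the model can cite it as [n]
    context = "\n\n".join(
        f"[{i + 1}] ({c['source']}) {c['text']}" for i, c in enumerate(chunks)
    )
    return (
        "Answer ONLY from the sources below. If the answer is not in the "
        "sources, say you don't know. Cite sources as [n].\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
```

The explicit "say you don't know" instruction, combined with numbered citations, makes unsupported answers easier to detect during quality monitoring.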

Audit Trail

All LLM interactions are logged:

```json
{
  "timestamp": "2025-12-16T20:30:00Z",
  "user_id": 456,
  "course_id": 123,
  "query": "What is machine learning?",
  "response_hash": "sha256:...",
  "model": "gpt-4",
  "tokens_used": 450,
  "latency_ms": 2340
}
```
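An entry in this shape can be produced by a helper that stores a hash of the response rather than its text, limiting the personal data kept in logs; the function name is an illustrative assumption:

```python
import hashlib
from datetime import datetime, timezone

def build_audit_entry(user_id: int, course_id: int, query: str,
                      response: str, model: str, tokens_used: int,
                      latency_ms: int) -> dict:
    """Build one audit-log record; the response is hashed, not stored verbatim."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(timespec="seconds"),
        "user_id": user_id,
        "course_id": course_id,
        "query": query,
        "response_hash": "sha256:" + hashlib.sha256(response.encode("utf-8")).hexdigest(),
        "model": model,
        "tokens_used": tokens_used,
        "latency_ms": latency_ms,
    }
```

Hashing still lets auditors verify that a logged interaction matches a disputed response, without retaining the generated text itself.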

Security Checklist

Before deploying to production:

  • [ ] TLS certificates installed and valid
  • [ ] API keys stored in environment variables
  • [ ] Webhook secrets configured and validated
  • [ ] Database credentials secured
  • [ ] Firewall rules configured
  • [ ] Access logs enabled
  • [ ] Backup procedures tested
  • [ ] Incident response plan documented
