GESHDO Consultancy — Official Policy Document

AI Acceptable Use Policy

Effective Date: February 2026 · Version 1.0 · Classification: Internal

1. Purpose

This policy establishes guidelines for the acceptable use of Artificial Intelligence (AI) tools and services by all employees, contractors, and partners of GESHDO Consultancy. It aims to ensure that AI technologies are used responsibly, ethically, and in compliance with applicable laws, regulations, and client agreements.

2. Scope

This policy applies to all personnel who use AI tools in the course of their work at GESHDO, including but not limited to:

  • Code generation and completion tools (e.g., GitHub Copilot, Cursor)
  • Conversational AI assistants (e.g., ChatGPT, Claude, Gemini)
  • AI-powered image and content generation
  • AI-enhanced productivity tools (e.g., Notion AI, Grammarly)
  • Any other AI service that processes work-related data

3. Data Classification

Before using any AI tool, you must classify the data you intend to process:

  Level   Classification               AI Tool Use
  L1      Public / Open Source         ✓ Permitted
  L2      Internal / Non-Sensitive     ⚠ With approved tools only
  L3      Client Data / Confidential   ⚠ Requires manager approval
  L4      PII / Regulated Data         ✗ Prohibited
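As an illustration, the classification gate above can be expressed as a simple lookup. This is a hedged sketch only; the function name, level keys, and flags below are hypothetical and not part of any GESHDO tooling:

```python
# Illustrative sketch of the Section 3 data-classification gate.
# Level names mirror the table above; flags are hypothetical.

RULES = {
    "L1": "permitted",            # Public / Open Source
    "L2": "approved-tools-only",  # Internal / Non-Sensitive
    "L3": "manager-approval",     # Client Data / Confidential
    "L4": "prohibited",           # PII / Regulated Data
}

def ai_use_allowed(level: str, tool_approved: bool = False,
                   manager_ok: bool = False) -> bool:
    """Return True if data at `level` may be processed by an AI tool."""
    rule = RULES.get(level)
    if rule == "permitted":
        return True
    if rule == "approved-tools-only":
        return tool_approved
    if rule == "manager-approval":
        return tool_approved and manager_ok
    return False  # L4 or unknown classification: prohibited
```

Note that the default is deny: anything not explicitly classified falls through to prohibited, which matches the spirit of the policy.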

4. Approved Tools

Only tools listed on the Approved Tools Registry may be used for work purposes. The registry is maintained by the engineering leadership team and is reviewed quarterly.

4.1 Tool Selection Guidelines

Employees must select the appropriate tool for the task at hand, considering the usage guidance and the strengths and weaknesses listed in the registry.

  • Context Awareness: For complex, multi-file refactoring, prioritize tools with deep codebase indexing (e.g., Cursor, Antigravity).
  • Data Privacy: For client-specific work, ensure the tool supports zero-retention mode (e.g., Copilot Business, Tabnine Enterprise).
  • Cost Efficiency: Use token-based agents (e.g., Claude Code) judiciously for high-value tasks, rather than routine boilerplate generation.

5. Prohibited Activities

The following activities are strictly prohibited:

  1. Uploading client source code to unapproved AI tools
  2. Processing personally identifiable information (PII) through any AI tool
  3. Using AI-generated code in client projects without review and attribution
  4. Sharing API keys, credentials, or secrets with AI assistants
  5. Using free-tier AI services that may train on your inputs
  6. Disabling privacy or telemetry controls on approved AI tools
  7. Representing AI-generated content as entirely human-authored work
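For item 4 above, a lightweight pre-paste check can catch obvious credential shapes before a prompt leaves your machine. The patterns below are an illustrative sketch, not an exhaustive or officially sanctioned scanner; dedicated tools such as gitleaks or trufflehog cover far more cases:

```python
import re

# Illustrative patterns for common credential shapes (sketch only).
SECRET_PATTERNS = [
    re.compile(r"AKIA[0-9A-Z]{16}"),                    # AWS access key ID
    re.compile(r"ghp_[A-Za-z0-9]{36}"),                 # GitHub personal access token
    re.compile(r"-----BEGIN [A-Z ]*PRIVATE KEY-----"),  # PEM private key block
    re.compile(r"(?i)(api[_-]?key|secret|token)\s*[:=]\s*['\"][^'\"]{8,}"),
]

def contains_secret(text: str) -> bool:
    """Rough check to run before sending `text` to an external AI tool."""
    return any(p.search(text) for p in SECRET_PATTERNS)
```

A check like this is a safety net, not a substitute for judgment; a negative result does not make a prompt safe to share.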

6. Code Review Requirements

All AI-generated or AI-assisted code must undergo standard code review processes before being merged into any codebase. Reviewers should pay particular attention to:

  • Security vulnerabilities and injection risks
  • Licensing compliance of suggested code patterns
  • Accuracy and correctness of generated logic
  • Adherence to project coding standards
  • Potential hallucinations or fabricated API references

7. Incident Reporting

If you suspect that sensitive data has been inadvertently shared with an AI tool, report it immediately to your team lead and the IT security team. Time is critical in these situations — do not wait for your next standup.

8. Acknowledgment

All employees are required to read, understand, and acknowledge this policy annually. Your continued use of AI tools in the workplace constitutes acceptance of these terms. Questions or concerns should be directed to the engineering leadership team.

Last reviewed: February 2026 · Next review: August 2026 · Policy owner: Engineering Leadership · Document ID: GESHDO-AI-POL-001