//Hackathon-S38byAayuAmor

Hackathon-S38

0
0
0

AI Security Command Center (ASCC)

A comprehensive security testing platform for Large Language Models (LLMs), featuring real-time vulnerability scanning, prompt injection detection, and automated red-teaming using Garak.

Dashboard Preview

🚀 Features

  • undefinedReal-time Vulnerability Sanning: Automated attacks using Garak to find security flaws.
  • undefinedLLM Guard Rails: Input/Output filtering for PII, Toxicity, and Prompt Injections.
  • undefinedInteractive Dashboard: Visual analytics of security posture and threat events.
  • undefinedMultiple Providers: Support for OpenAI, HuggingFace, and Local LLMs (Ollama).

🛠️ Tech Stack

  • undefinedBackend: Python, Django REST Framework, Garak, LLM-Guard
  • undefinedFrontend: TypeScript, Next.js, Tailwind CSS
  • undefinedDatabase: SQLite (Dev) / PostgreSQL (Prod)

⚡ Quick Start

1. Backend Setup

# Navigate to server directory
cd server

# Install uv if you haven't already
uv run manage.py runserver


# Run migrations
uv run manage.py migrate

# Create admin user
uv run manage.py createsuperuser

# Start the server
uv run manage.py runserver

Backend runs at http://localhost:8000

2. Frontend Setup

# Navigate to frontend directory
cd tricode

# Install dependencies
npm install

# Start development server
npm run dev

Frontend runs at http://localhost:3000

📖 Usage

  1. undefinedLogin: Access the dashboard and log in with your credentials.
  2. undefinedConfigure LLM: Go to Settings and add your API keys (OpenAI / HuggingFace).
  3. undefinedRun Scan: Navigate to the Test Suite, select a model, and click Start Scan.
  4. undefinedView Results: Wait for the background garak process to finish and view the detailed vulnerability report.

📦 Deployment

Production Build via Docker

# Build and run with Docker Compose
docker-compose up --build -d

Manual Deployment

undefinedFrontend:undefined

cd tricode
npm run build
npm start

undefinedBackend:undefined

cd server
gunicorn core.wsgi:application --bind 0.0.0.0:8000

🔒 Security

This project deals with active security testing. Use responsible AI practices.undefined

  • Only scan models you have permission to test.
  • Be aware of API costs associated with extensive probing.
[beta]v0.14.0