Experiments & Evaluations Index

This repository serves as an index of experimental projects, evaluations, proof-of-concepts, templates, patterns, and exploratory ideas related to AI/LLM development and workflows.

Last updated: 06/04/2026

Repository Index

About this Index

This collection brings together various experimental repositories exploring AI agent workflows, LLM capabilities, evaluation frameworks, and development patterns. These repositories represent hands-on experiments, proof-of-concepts, benchmarking efforts, and reusable templates for AI-driven development.

Quick Reference: Evaluations

Quick Reference: Experiments

AI Agent Development

Agent Workflows & Patterns

Development Templates

LLM Evaluation & Benchmarking

LLM Capabilities & Testing

LLM Experiments

Hugging Face Spaces

Speech-to-Text & Audio Processing

STT Benchmarks & Evaluation

Hugging Face Spaces

Hugging Face Datasets

Audio Samples & Resources

Audio Processing Experiments

Image Generation & Visual AI

Image Generation Evaluation

Specialized Applications

Multi-Agent Simulations

OSINT & Intelligence

Data Analysis

Testing & Documentation

Test Repositories

Related Subindexes

Note: This is a focused index covering experimental AI/LLM development projects. For a higher-level collection of all repository indexes and other projects, see the GitHub Master Index.

danielrosehill/Github-Master-Index

Author

Daniel Rosehill
Contact: public@danielrosehill.com
Website: danielrosehill.com

Proof of Concepts — AI Self-Ideation

Proof of Concepts — Context & Interview Workflows

Proof of Concepts — Multi-Agent Panels & Decision Making

Proof of Concepts — Research & Report Generation

Proof of Concepts — Other

Additional Evaluations

Additional Experiments

Data Visualization