Selected Work

AI work that shipped and held up

Selected engagements across federal research, financial services, and hospitality. Each one focused on delivering real capability, not just a report or a deck.

Experience includes work supporting MITRE, Vanguard, and Marriott. Client names listed for context; details available upon request.

Federal Research

Accelerating Research Insight with Responsible GenAI

Supporting a federally focused research organization

Developed and led a comprehensive GenAI enablement program for analysts and engineers, combining workshop delivery, pipeline development, governance design, and observability infrastructure.

The challenge

Research analysts needed faster access to data insights but lacked the tools and skills to use AI effectively, and the organization required strong accountability and responsible-use standards given the nature of its mission.

Outcomes

  • Analysts equipped with practical, repeatable AI workflows they could apply independently
  • Production NL-to-SQL interface with documented access controls and evaluation layer
  • Published responsible use guidance that became an internal reference standard
  • Observability infrastructure providing ongoing visibility into AI system adoption

Approach

  • Led hands-on workshops in Python and AWS Athena covering responsible GenAI use, data querying, and ethical integration practices
  • Developed NL-to-SQL pipelines that allowed analysts to query structured data using natural language, with access controls and schema-level guardrails
  • Created training materials, reference guides, and repeatable workflow templates to accelerate independent adoption
  • Designed and published guidance on attribution, audit logging, and content safety to support compliance and mission alignment
  • Conducted outreach and consultations with mission owners to surface evolving needs and align AI services accordingly
  • Delivered observability dashboards to track usage, system performance, and adoption patterns across the organization
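The schema-level guardrails mentioned above can be sketched as a pre-execution check on generated SQL. This is a minimal illustration only; the table names and policy are hypothetical, not drawn from the engagement:

```python
import re

# Hypothetical allowlist: tables an analyst's role is permitted to query.
ALLOWED_TABLES = {"trips", "stations"}

READ_ONLY = re.compile(r"^\s*SELECT\b", re.IGNORECASE)
TABLE_REF = re.compile(r"\b(?:FROM|JOIN)\s+([A-Za-z_][\w.]*)", re.IGNORECASE)

def check_generated_sql(sql: str) -> bool:
    """Reject anything that is not a SELECT over allowlisted tables."""
    if not READ_ONLY.match(sql):          # block DDL/DML entirely
        return False
    tables = {t.lower() for t in TABLE_REF.findall(sql)}
    return bool(tables) and tables <= ALLOWED_TABLES
```

In practice a check like this sits between the NL-to-SQL model and the query engine, so a hallucinated table name or a destructive statement never reaches production data.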
NL-to-SQL · Observability · Responsible AI · Python · AWS Athena · Enablement · Governance


Financial Services

Responsible ML Practices for Product and UX Teams

Supporting Vanguard (experience via Deloitte Consulting)

Designed and delivered instructional prototypes and workshops to help product managers and UX practitioners understand, interpret, and apply predictive model outputs responsibly.

The challenge

Product and UX teams were expected to incorporate ML model outputs into product decisions but lacked the foundational literacy to evaluate those outputs critically or communicate about them confidently with technical stakeholders.

Outcomes

  • Product and UX teams with demonstrated ML literacy and practical decision frameworks
  • Reusable instructional prototypes adopted for onboarding new team members
  • Improved cross-functional communication between product, UX, and data science teams

Approach

  • Conducted discovery sessions to understand the specific workflows, vocabulary, and gaps of product and UX audiences
  • Created instructional prototypes that demonstrated how predictive models work, how to interpret confidence scores, and when to trust or question model outputs
  • Delivered workshops covering responsible ML practices, ethical evaluation, and how to raise concerns about model behavior
  • Developed reference guides and decision frameworks that gave non-technical teams practical tools for model-informed decision-making
  • Provided one-on-one support to individual stakeholders working through specific product questions
ML Literacy · Responsible ML · Workshops · Stakeholder Enablement · Prototypes


Hospitality and Travel

ML Transparency and Ethical Evaluation for ML Teams

Supporting Marriott (experience via Deloitte Consulting)

Led workshops and developed training programs to help ML teams understand recommendation model transparency, evaluate models ethically, and communicate results to non-technical stakeholders.

The challenge

An ML team working on recommendation systems needed structured guidance on how to evaluate models for fairness and transparency, and how to build trust with business stakeholders who were skeptical of black-box outputs.

Outcomes

  • ML team with structured practices for transparency and ethical evaluation
  • Stakeholder communication materials that improved business-team confidence in model outputs
  • Internal documentation standards that persisted beyond the engagement

Approach

  • Led workshops on recommendation model transparency, covering how models generate outputs, where bias can enter, and how to document and communicate model limitations
  • Developed training materials on ethical evaluation practices, including fairness metrics, audit approaches, and documentation standards
  • Built stakeholder communication guides to help ML teams explain model behavior to business audiences without oversimplifying
  • Provided ongoing support and one-on-one guidance during model development cycles
ML Transparency · Ethical Evaluation · Workshops · Recommendation Systems · Stakeholder Communication


Have a similar challenge?

If your team is navigating the same kind of problem, let's talk it through. A 20-minute call is enough to figure out whether there's a fit.