AI and ML Methodology
At Flightcrew, we use a mix of cutting-edge AI and ‘old-fashioned’ ML to build the world's most comprehensive model of cloud infrastructure. Here’s a brief overview of the methods we use at Flightcrew.
We use LLMs to place Flightcrew’s insights into developer workflows:
- Writing PRs and Comments. We ask LLMs to wrap a pre-existing code change and analysis in a PR or Comment.
- Code:Cloud Mapping. We use LLMs to map a line of config to the workloads it controls in the cloud. We also provide options to manually define workload groupings.
- Pattern Matching. We use LLMs and semantic methods to relate infrastructure health to common vulnerabilities and possible solutions.
- Explanations. LLMs offer clear explanations for our system’s recommendations, making technical concepts accessible to all stakeholders.
We use more ‘traditional’ ML and semantic methods to understand your cloud infrastructure:
- Understanding Infrastructure Health. We use ML methods to infer application health from your cloud and observability data.
- Impact Analysis. Flightcrew uses ML methods such as Gaussian and neural network models to predict the impact of a proposed configuration change.
- Dependency Analysis. We use semantic methods to link code and cloud workloads, and use clustering to identify correlations between cloud entities.
- Identifying & Recommending YAML. Flightcrew doesn’t use LLMs to generate config structure or values.
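To make the Impact Analysis bullet concrete, here is a minimal sketch of a Gaussian-process-style regression predicting a service metric under a proposed configuration change. This is an illustration only, not Flightcrew’s actual model: the kernel choice, the metric names (`cpu_limits`, `p99_latency`), and all data values are invented.

```python
import numpy as np

def rbf_kernel(a, b, length_scale=1.0):
    """Squared-exponential kernel between two sets of 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length_scale) ** 2)

def gp_predict(x_train, y_train, x_new, noise=1e-3):
    """Posterior mean of a zero-mean GP at x_new, given noisy observations."""
    K = rbf_kernel(x_train, x_train) + noise * np.eye(len(x_train))
    k_star = rbf_kernel(x_new, x_train)
    return k_star @ np.linalg.solve(K, y_train)

# Invented example: p99 latency (ms) observed at different CPU limits (cores).
cpu_limits = np.array([0.5, 1.0, 2.0, 4.0])
p99_latency = np.array([400.0, 220.0, 130.0, 110.0])

# Predict latency if a PR proposes raising the CPU limit to 3 cores.
predicted = gp_predict(cpu_limits, p99_latency, np.array([3.0]))
```

The model interpolates between observed behavior at nearby configurations, which is the general shape of “predict what happens before the change ships.”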
Q&A
How does Flightcrew diagnose the health of my cloud infrastructure?
Flightcrew uses Gaussian (‘real-time’) and neural network (‘non-linear’) models to understand how your cloud infrastructure reacts to load.
We also maintain a graph of common reliability and compliance vulnerabilities; this database can be augmented with an organization’s preferences (SLOs, policies, etc.) and context (incidents, architecture, etc.).
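The vulnerability-database idea above can be sketched as a set of named checks that an organization augments with its own policies. This is a hedged illustration, not Flightcrew’s actual schema: the vulnerability names, workload fields, and policy below are all invented.

```python
# Built-in checks: each maps a vulnerability name to a predicate on a workload.
VULNERABILITIES = {
    "missing-cpu-limit": lambda w: "cpu_limit" not in w,
    "single-replica": lambda w: w.get("replicas", 1) < 2,
}

def check_workload(workload, org_policies=None):
    """Return names of known vulnerabilities (plus org policies) the workload violates."""
    checks = dict(VULNERABILITIES)
    checks.update(org_policies or {})  # augment with org-specific SLOs / policies
    return sorted(name for name, failed in checks.items() if failed(workload))

# An org adds its own policy: every service must run at least 3 replicas.
org = {"min-replicas-3": lambda w: w.get("replicas", 1) < 3}
issues = check_workload({"replicas": 2, "cpu_limit": "500m"}, org)
```

Here the workload passes the built-in checks but violates the org’s stricter replica policy, which is the “augmented with an organization’s preferences” behavior described above.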
Can I customize Flightcrew’s insights?
Yes! Flightcrew learns from your code, cloud, and observability data, but if you need to provide more direction:
- You can control Training Data (‘Over the last 90 days’)
- You can provide additional Preferences (99thp_response_rate < X, min_replicas > 2, SOC2)
- You can provide additional Context (ex: ‘Not restartable’)
You can also provide feedback to Flightcrew wherever you interact with it (PRs, Slack, UI, etc.).
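The training-data control above (‘Over the last 90 days’) amounts to windowing which observability samples inform recommendations. A minimal sketch, with an invented sample format; this is not Flightcrew’s actual data model:

```python
from datetime import datetime, timedelta, timezone

def in_window(samples, days, now=None):
    """Keep only samples whose timestamp falls within the last `days` days."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=days)
    return [s for s in samples if s["ts"] >= cutoff]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
samples = [
    {"ts": datetime(2024, 5, 20, tzinfo=timezone.utc), "cpu": 0.42},
    {"ts": datetime(2024, 1, 1, tzinfo=timezone.utc), "cpu": 0.91},  # outside the 90-day window
]
recent = in_window(samples, days=90, now=now)
```

Only the May sample survives a 90-day window ending June 1; the January sample is excluded from training.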
Can Flightcrew Hallucinate?
No. Flightcrew can’t hallucinate recommendations about your infrastructure: recommendations are generated from data in your code, cloud, and observability stack. As long as your past data is representative of future behavior, you can trust Flightcrew.
That said, Flightcrew’s LLMs will occasionally use poor/confusing grammar.
Does Flightcrew use External Data?
Flightcrew uses some external data to understand common behaviors and vulnerabilities of cloud infrastructure (ex: Kubernetes scaling behavior), but our actual recommendations come from your local data.
We won’t use data from your cloud to inform our global understanding of cloud infrastructure or train an external model.
Flightcrew’s numbers look off. What’s going on?
Most likely the discrepancy comes from a change in your observability data and/or from aggregating observability data at a different resolution than you are used to (Deployment vs. Container).
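The resolution mismatch above is easy to see with a toy example: the same CPU samples read per Container versus summed per Deployment give different numbers. A hedged sketch with invented workload names and values:

```python
samples = [
    {"deployment": "api", "container": "api-1", "cpu": 0.30},
    {"deployment": "api", "container": "api-2", "cpu": 0.50},
]

def per_container(samples):
    """Container-resolution view: one CPU figure per container."""
    return {s["container"]: s["cpu"] for s in samples}

def per_deployment(samples):
    """Deployment-resolution view: CPU summed across a deployment's containers."""
    totals = {}
    for s in samples:
        totals[s["deployment"]] = totals.get(s["deployment"], 0.0) + s["cpu"]
    return totals
```

The Container view reports 0.30 and 0.50 cores, while the Deployment view reports 0.80 for the same workload; neither is wrong, they are just different resolutions.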