Hummingbird AI: Security & Privacy Overview
Hummingbird is committed to building AI products that are safe, secure, and able to meet the strict standards of risk & compliance organizations. Below, you will find an overview of Hummingbird AI functionality and related security and privacy practices.
Keep in mind that Hummingbird AI falls under the same security program as the rest of Hummingbird. For general and additional information, visit our Trust Center.
Hummingbird offers a suite of AI-powered tools for risk & compliance teams, which are integrated directly into the platform. Current Hummingbird AI capabilities include:
File Summarization: Quickly glean key details from case files with AI summaries. From onboarding questionnaires to legal records, AI summaries help investigators get the essential information they need, without having to search for it.
Narrative Generation: Speed up regulatory reporting by using AI to generate SAR/STR narratives. Hummingbird AI delivers an accurate, complete first draft for investigators to review and edit. Narratives are generated using your case data and can be delivered in a standardized format using templates.
Narrative Validation: Improve the quality of your SARs by validating that narratives are accurate, error-free, and comply with FIU guidelines. Hummingbird AI instantly checks that SAR narratives include all necessary information, are free from spelling mistakes, and reflect your case data.
Hummingbird AI uses third-party large language models (LLMs) hosted within Hummingbird’s own secure instances in our Azure cloud, which benefit from Azure’s enterprise-grade security. Each Azure instance is locked down, and no customer data is stored in Azure. The models Hummingbird uses are stateless, meaning no data is retained from requests made to the LLMs. This helps keep your data safe, prevent unauthorized access, and minimize the risk of data breaches.
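As a rough illustration, a stateless model call looks like the sketch below: every request carries its full context, and nothing is written to storage afterward. The endpoint, deployment name, API version, and key handling here are hypothetical, not Hummingbird’s actual configuration.

```typescript
// Minimal sketch of a stateless call to an Azure-hosted model.
// The endpoint, deployment name, and API version are illustrative only.
async function statelessCompletion(prompt: string): Promise<string> {
  const response = await fetch(
    "https://example-instance.openai.azure.com/openai/deployments/example-model/chat/completions?api-version=2024-02-01",
    {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        // The key is held server-side; it is never exposed to end users.
        "api-key": process.env.AZURE_API_KEY ?? "",
      },
      body: JSON.stringify({
        // The full context travels with each request; the model retains nothing.
        messages: [{ role: "user", content: prompt }],
      }),
    }
  );
  const data = await response.json();
  // The completion is returned directly to the caller; no copy is persisted.
  return data.choices[0].message.content;
}
```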
When you request an AI summary, Hummingbird uses our standard AWS architecture and third-party models, hosted within our own Azure instances, to extract and summarize the text. Hummingbird AI then returns the response to you.
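A simplified version of that flow might look like the following sketch. The names extractText and statelessCompletion are hypothetical, standing in for Hummingbird’s internal text-extraction step and the model call shown above.

```typescript
// Hypothetical sketch of the AI summary flow. extractText stands in for
// Hummingbird's text-extraction step, and statelessCompletion for the
// Azure-hosted model call sketched earlier.
declare function extractText(file: Buffer): Promise<string>;
declare function statelessCompletion(prompt: string): Promise<string>;

async function summarizeFile(file: Buffer): Promise<string> {
  // 1. Extract the raw text from the case file (questionnaire, legal record, etc.).
  const text = await extractText(file);

  // 2. Send the text to the model with a summarization prompt; the request is stateless.
  const summary = await statelessCompletion(
    `Summarize the key details of the following document for an investigator:\n\n${text}`
  );

  // 3. Return the summary to the user. Nothing is retained by the model.
  return summary;
}
```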
Depending on the specific details of the narrative being generated, Hummingbird sends several different custom-engineered prompts to the LLM along with the relevant case data. Hummingbird AI then streams the response back to you as the LLM composes it.
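Streaming in this style can be sketched as below, assuming a hypothetical internal endpoint that relays the model’s output; reading the response body incrementally is what lets the draft appear as it is composed.

```typescript
// Hypothetical sketch of streaming a narrative draft back to the user.
// The endpoint and payload shape are illustrative, not Hummingbird's actual API.
async function streamNarrative(
  prompts: string[],
  caseData: unknown,
  onChunk: (text: string) => void
): Promise<void> {
  const response = await fetch("https://example.internal/ai/narrative", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ prompts, caseData, stream: true }),
  });

  // Read the body incrementally so the draft appears as the LLM composes it.
  const reader = response.body!.getReader();
  const decoder = new TextDecoder();
  while (true) {
    const { done, value } = await reader.read();
    if (done) break;
    onChunk(decoder.decode(value, { stream: true }));
  }
}
```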
When you validate a narrative using AI, Hummingbird sends a set of custom-engineered prompts to the LLM along with your composed narrative and relevant case data. The LLM then checks your narrative against each prompt and the case data, and Hummingbird AI returns the results to you as passed or failed validation checks.
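Conceptually, the validation pass resembles the sketch below. The check names and prompt placeholders are illustrative assumptions, not Hummingbird’s actual checks, and statelessCompletion is the hypothetical model call sketched earlier.

```typescript
// Hypothetical sketch of narrative validation checks. The check names and
// prompt placeholders are illustrative; statelessCompletion is the model
// call sketched earlier.
declare function statelessCompletion(prompt: string): Promise<string>;

interface ValidationResult {
  check: string;
  passed: boolean;
}

async function validateNarrative(
  narrative: string,
  caseData: unknown
): Promise<ValidationResult[]> {
  // Each check pairs a custom-engineered prompt with the narrative and case data.
  const checks = [
    { name: "Includes required FIU fields", prompt: "..." },
    { name: "Free of spelling mistakes", prompt: "..." },
    { name: "Consistent with case data", prompt: "..." },
  ];

  return Promise.all(
    checks.map(async ({ name, prompt }) => {
      const verdict = await statelessCompletion(
        `${prompt}\n\nNarrative:\n${narrative}\n\nCase data:\n${JSON.stringify(caseData)}`
      );
      // The model is instructed to answer PASS or FAIL for each check.
      return { check: name, passed: verdict.trim().toUpperCase().startsWith("PASS") };
    })
  );
}
```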
Hummingbird does not use customer data to train or fine-tune AI models.
In the future, Hummingbird may train our own models using aggregated and de-identified data from customers who have explicitly granted Hummingbird the right to use their data. At no point will Hummingbird use customer PII or other sensitive data to train models.
Hummingbird AI is covered by Hummingbird’s standard security practices and compliance standards, which include SOC 2 Type 2 certification. This is in addition to the AI-specific security practices described above.
To learn more or request additional information, visit our Trust Center.
The use of Hummingbird AI is optional for all organizations. Admins can request to disable AI features. Visibility and access to some Hummingbird AI capabilities can also be managed using badge permissions.
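As a minimal sketch, permission gating of this kind could look like the following; the types and field names are hypothetical and do not reflect Hummingbird’s actual data model.

```typescript
// Hypothetical sketch of gating AI features by org setting and badge.
// Types and field names are illustrative, not Hummingbird's data model.
interface Organization {
  aiEnabled: boolean;
}

interface User {
  badges: string[];
}

function canUseAiFeature(org: Organization, user: User, requiredBadge: string): boolean {
  // AI must be enabled for the organization, and the user must hold the
  // badge that grants access to the specific capability.
  return org.aiEnabled && user.badges.includes(requiredBadge);
}
```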