Cassidy AI
Cassidy AI, in the context of cybersecurity, is a specialized Enterprise AI Workspace platform built to connect large language models (LLMs) with an organization's internal, proprietary data. Because it handles sensitive information and commits to specific governance standards, it is a critical element of an organization's security posture.
Cassidy AI's cybersecurity significance rests on three core pillars:
1. Centralized Knowledge Base and Data Security
Cassidy AI acts as a secure, centralized hub for a company's internal knowledge (documents, Slack, Notion, SharePoint, etc.). This makes it a high-value target for attackers but also provides a framework for defense.
Risk: Centralized Data Leakage: Because Cassidy uses Retrieval-Augmented Generation (RAG), it queries the company’s internal data in real-time to generate accurate answers for employees (e.g., HR policies, customer support responses, RFP answers). If an attacker breaches the platform or its integrations, they gain access to a consolidated trove of the company’s most sensitive information.
Defense: Granular Access Control: To mitigate this, Cassidy emphasizes granular access control and row-level security. This is a critical cybersecurity control that ensures an employee can only access data (and receive AI-generated answers) that they are authorized to see according to their existing organizational permissions, thus preventing widespread data leakage from a single compromised account.
Data Isolation Assurance: The company commits to never using customer data to train its underlying models. This commitment is vital for enterprise security, as it protects intellectual property and confidential data from being unintentionally exposed or copied by the AI vendor.
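The row-level access control described above can be sketched as a permission filter applied at retrieval time, before any document reaches the LLM context. This is an illustrative sketch under assumed data structures, not Cassidy AI's actual implementation; the document store, group names, and helper functions are all hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Document:
    """A knowledge-base entry tagged with the groups allowed to read it."""
    doc_id: str
    text: str
    allowed_groups: set = field(default_factory=set)

def retrieve_for_user(query: str, user_groups: set, corpus: list) -> list:
    """Return only documents the user is authorized to see.

    In a real RAG pipeline a vector search would rank candidates first;
    here a naive keyword match stands in for retrieval. The key point is
    that the permission filter runs BEFORE results reach the LLM context,
    so a compromised account can only leak what that account could read.
    """
    candidates = [d for d in corpus if query.lower() in d.text.lower()]
    return [d for d in candidates if d.allowed_groups & user_groups]

corpus = [
    Document("hr-001", "Parental leave policy: 16 weeks paid.", {"hr", "all-staff"}),
    Document("fin-007", "Q3 acquisition target shortlist.", {"finance-execs"}),
]

# An engineer in "all-staff" sees the HR policy but not the M&A document.
results = retrieve_for_user("policy", {"all-staff"}, corpus)
```

The design point is that authorization happens in the retrieval layer rather than relying on the model to withhold information it has already been given.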
2. Workflow Automation and Agentic Risk
Cassidy AI goes beyond simple Q&A by supporting agentic workflows and automation that can connect to and take actions in other enterprise tools (e.g., prioritizing Zendesk tickets, drafting sales emails).
Risk: Expanded Attack Surface: Agentic workflows elevate the risk profile. A successful attack, such as prompt injection or unauthorized access, could allow an adversary to compromise the AI assistant and then instruct it to take harmful actions in downstream systems, like sending unauthorized emails, moving confidential files in SharePoint, or querying a high volume of customer data.
Defense: Secure Integrations: The platform must securely manage the credentials and connections to all third-party systems. Cassidy AI's security must enforce secure authentication and encryption for all data transmitted between the AI platform and the integrated tools, such as Slack or Zendesk.
3. Compliance and Trust Frameworks
Cassidy AI is explicitly built for the enterprise, adhering to established security and privacy standards to gain customer trust.
SOC 2 Type II Compliance: This certification is a formal assurance that the company has implemented and maintains strict internal controls related to the security, availability, processing integrity, and confidentiality of its systems and the data it processes. This is a fundamental requirement for most large organizations.
Data Privacy Compliance: Adherence to regulations like GDPR (General Data Protection Regulation) ensures that the platform has controls in place for data sovereignty, user privacy, and proper data handling practices, which are necessary for global commercial use.
Cassidy AI is a key component of modern AI Attack Surface Management, where the cybersecurity focus shifts from securing a generic application to securing the complex data flows, access permissions, and automated actions initiated by a centralized, intelligence-driven platform.
ThreatNG's capabilities, especially its focus on External Attack Surface Management (EASM) and Digital Risk Protection (DRP), are highly effective in securing an organization's integration with Cassidy AI. It works by detecting the external misconfigurations, credential leaks, and digital risks that could allow an attacker to compromise the perimeter and gain unauthorized access to the sensitive internal data and agentic workflows managed by Cassidy AI.
External Discovery and Continuous Monitoring
ThreatNG's External Discovery is crucial for identifying the unmanaged or exposed interfaces that connect to the Cassidy AI platform. It performs purely external unauthenticated discovery using no connectors, providing an attacker's view.
API Endpoint Discovery: An organization must expose an interface or API gateway to enable employees to access and interact with the Cassidy AI platform. ThreatNG discovers these externally facing Subdomains and APIs, providing a critical inventory of the connection points an attacker could target with brute-force attacks or vulnerability exploits.
Shadow AI Discovery: If a department begins using Cassidy AI outside of approved IT channels, ThreatNG's Continuous Monitoring will detect the new, unmanaged cloud assets (IP addresses or Subdomains) spun up for this purpose, immediately flagging the presence of Shadow AI before it can become a permanent, unsecured data store.
Code Repository Exposure (Credential Leakage): The most direct path to compromising an AI platform is credential theft. ThreatNG's Code Repository Exposure discovers public repositories and investigates their contents for Access Credentials. An example is finding a publicly committed API Key or sensitive Configuration File that grants access to the Cassidy AI platform or the data stores it connects to (e.g., SharePoint credentials), which directly enables an adversary to bypass front-end security.
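The kind of credential discovery described above can be illustrated with a minimal pattern-based secret scan over committed file contents. This is a sketch of the general technique, not ThreatNG's scanner; a production tool uses far more rules plus entropy analysis. The sample file contents are fabricated (the AWS key shown is AWS's documented example key).

```python
import re

# Two well-known credential shapes: AWS access key IDs, and generic
# "api_key = ..." assignments. Both patterns are standard secret-scanning
# heuristics, not a complete ruleset.
SECRET_PATTERNS = {
    "aws_access_key_id": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "generic_api_key": re.compile(
        r"""(?i)\bapi[_-]?key\b\s*[:=]\s*['"]?([A-Za-z0-9_\-]{20,})"""
    ),
}

def scan_text(text: str) -> list:
    """Return (rule_name, line_number) pairs for every credential hit."""
    findings = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(line):
                findings.append((name, lineno))
    return findings

committed_file = """\
# config.py -- accidentally pushed to a public repo
AWS_KEY = "AKIAIOSFODNN7EXAMPLE"
api_key = "sk_live_abcdefghijklmnopqrstuv"
"""
findings = scan_text(committed_file)
```

Each finding pinpoints the file line so the key can be revoked and the commit history audited.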
Investigation Modules and Technology Identification
ThreatNG’s Investigation Modules provide the essential context to confirm that an exposure is related to the high-value Cassidy AI platform, ensuring findings are prioritized.
Detailed Investigation Examples
DNS Intelligence and AI/ML Identification: The DNS Intelligence module includes Vendor and Technology Identification. ThreatNG can identify if an external asset's Technology Stack is running services from AI Model & Platform Providers or AI Development & MLOps tools. While it may not specifically name "CassidyAI," it can identify the underlying cloud or container technologies used to host the service, or the use of specific API management systems, confirming the exposed asset is part of the sensitive AI environment.
Search Engine Exploitation for Private Prompts/Workflows: The Search Engine Attack Surface can identify sensitive information that search engines have inadvertently indexed. An example is discovering an exposed JSON File or log file containing internal prompts or the detailed structure of an automated workflow. This leak provides an attacker with the necessary blueprint to craft a targeted prompt injection attack, enabling them to manipulate the AI agent's actions in downstream systems.
Cloud and SaaS Exposure for Unsecured Integrations: ThreatNG identifies public cloud services (Open Exposed Cloud Buckets). An example is finding an exposed bucket used to stage documents before they are fed into Cassidy AI for knowledge indexing. This misconfiguration exposes the organization's knowledge base and proprietary data to public access.
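Detecting an open bucket of this kind typically comes down to interpreting an unauthenticated response from the storage endpoint. The classifier below is a self-contained sketch of that logic (shown as a pure function so it runs without network access); the status codes and the ListBucketResult signature reflect S3-style behavior, and any bucket names involved are hypothetical.

```python
# Illustrative check of the kind that flags an Open Exposed Cloud Bucket:
# classify what an anonymous, unauthenticated GET against the bucket URL
# returned. On S3-style endpoints, a 200 response containing a
# <ListBucketResult> XML document means anyone can enumerate objects.

def classify_bucket_response(status: int, body: str) -> str:
    """Interpret an unauthenticated response from an S3-style endpoint."""
    if status == 200 and "<ListBucketResult" in body:
        return "PUBLIC-LISTABLE"   # anyone can enumerate staged documents
    if status == 403:
        return "PRIVATE"           # bucket exists but denies anonymous access
    if status == 404:
        return "NOT-FOUND"
    return "UNKNOWN"
```

A bucket used to stage documents for knowledge indexing that classifies as PUBLIC-LISTABLE exposes the organization's knowledge base before it ever reaches the AI platform.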
External Assessment and Platform Risk
ThreatNG's external assessments quantify the security risk introduced by the AI platform's exposure.
Detailed Assessment Examples
Cyber Risk Exposure: This score is highly influenced by exposed credentials. The discovery of an exposed platform API Key or service credential via Code Repository Exposure immediately increases the Cyber Risk Exposure score. This signals a direct, high-impact threat to the confidentiality and integrity of the data housed within Cassidy AI.
Data Leak Susceptibility: This assessment is based on Dark Web Presence and Cloud and SaaS Exposure. If ThreatNG detects an Open Exposed Cloud Bucket linked to the AI's data indexing, or finds Compromised Credentials associated with an employee on the Dark Web, the Data Leak Susceptibility score will be high, indicating a direct path to accessing the platform's consolidated knowledge base.
Web Application Hijack Susceptibility: This score addresses the security of the web interface used to interact with the platform. If the front-end application is found to have a critical vulnerability, an attacker could exploit it to steal user session tokens, allowing them to impersonate an authorized user and access the AI workspace.
Intelligence Repositories and Reporting
ThreatNG’s intelligence and reporting structure ensure efficient, prioritized response to exposures involving the critical AI platform.
DarCache Vulnerability and Prioritization: When the web server or application gateway hosting the Cassidy AI interface is found to be vulnerable, the DarCache Vulnerability checks for inclusion in CISA's Known Exploited Vulnerabilities (KEV) catalog. This enables security teams to prioritize patching infrastructure flaws that attackers are most likely to exploit to breach the AI platform's perimeter.
Reporting: Reports are Prioritized (High, Medium, Low) and include Reasoning and Recommendations. This ensures teams quickly understand the risk, e.g., "High Risk: Exposed IAM Key, Reasoning: Direct access to RAG data stores and agentic workflows possible, Recommendation: Immediately revoke key and audit all source code."
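A finding of the shape quoted above can be represented as structured data and sorted for triage. The field names and triage function below are illustrative, not ThreatNG's actual report schema.

```python
import json

# Illustrative structured form of the prioritized finding quoted above.
# Field names are hypothetical, not ThreatNG's report schema.
finding = {
    "severity": "High",
    "title": "Exposed IAM Key",
    "reasoning": "Direct access to RAG data stores and agentic workflows possible",
    "recommendation": "Immediately revoke key and audit all source code",
    "source_module": "Code Repository Exposure",
}

def triage_order(findings: list) -> list:
    """Sort findings so High-severity items are handled first."""
    rank = {"High": 0, "Medium": 1, "Low": 2}
    return sorted(findings, key=lambda f: rank[f["severity"]])

report = json.dumps(finding, indent=2)
```

Pairing severity with explicit reasoning and a recommendation is what lets a responder act without re-investigating the exposure from scratch.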
Complementary Solutions
ThreatNG's external intelligence on Cassidy AI exposures works synergistically with internal security solutions.
Cloud Access Security Broker (CASB) Tools: When ThreatNG identifies an exposed Cloud Storage Bucket (a confirmed misconfiguration) containing data for the AI platform, a complementary CASB solution can consume that external discovery data and automatically enforce data loss prevention (DLP) policies, restricting unauthorized sharing or transfer of documents destined for the AI's knowledge base.
Identity and Access Management (IAM) Platforms: The discovery of a leaked access credential by Code Repository Exposure is fed to a complementary IAM platform (like Microsoft Entra ID or Ping Identity). This synergy enables the IAM system to instantly force a password or key rotation for the compromised account, thereby neutralizing the threat before an attacker can exploit the credential to log into the AI platform.
AI/ML Security Platforms (Prompt Injection Monitoring): ThreatNG's finding of exposed prompts or detailed workflow logic is shared with a complementary AI security platform. The security platform can then use this context to refine its Adversarial AI Readiness detection capabilities, improving its ability to spot and block malicious prompt injection attempts targeting the exposed workflow logic.
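The IAM handoff described above can be sketched as a small handler that reacts to a leaked-credential finding by revoking the key and forcing rotation. The IamClient class and its methods are hypothetical stand-ins for a real IAM API (such as a Microsoft Entra ID or Ping Identity SDK), not actual vendor calls.

```python
# Illustrative handoff from an external finding to an IAM platform.
# IamClient is an in-memory stand-in, not a real vendor SDK.

class IamClient:
    """Minimal mock of an IAM platform's credential-management API."""
    def __init__(self):
        self.revoked = set()
        self.rotation_forced = set()

    def revoke_key(self, key_id: str):
        self.revoked.add(key_id)

    def force_rotation(self, account: str):
        self.rotation_forced.add(account)

def handle_leak_finding(finding: dict, iam: IamClient) -> bool:
    """Neutralize a leaked credential before an attacker can use it."""
    if finding.get("type") != "leaked_credential":
        return False
    iam.revoke_key(finding["key_id"])
    iam.force_rotation(finding["account"])
    return True
```

The value of the synergy is latency: revocation is triggered by the external discovery itself, rather than waiting for an internal audit to notice the leak.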