Vendors, service providers, and even government agencies have been rapidly deploying chat-based features on their websites to field requests ranging from sales to support. These chat applications accept plain-text requests from humans and feed them into an artificial intelligence engine, which can provide “smart”, scripted responses to inquiries.
As the machine learning technology powering many of these chat applications continues to get smarter, users are less and less able to discern whether they are interacting with a real person or a machine. Some services, classified as “conversation marketing” rather than chatbots, may actually route you to an appropriate live person for a more in-depth conversation. However, with a little social engineering, a threat actor can determine what is behind the scenes.
In 2018, we witnessed several breaches, including those affecting Delta Air Lines and Sears, that exploited basic security flaws in chatbots via the companies’ vendor supply chains. This new trend warrants increased attention and, as with any new technology, the adoption of cybersecurity best practices.
Addressing the IT Security Vacuum around Business Chatbots
Regardless of whether a human or a machine is answering, there are some interesting security risks inherent in chat-based services. Yet, if you scour the web for security best practices around implementing chatbots as a support extension for your business, you will find very little guidance on how to keep these services secure, both for your company and for the end user.
For starters, consider an automated service that is either hosted by the company itself or connected to a cloud-based artificial intelligence engine consumed as a service. This service needs to access backend resources in order to respond effectively to information requests. Such a design will typically include a database fronted by middleware that allows queries via a secure application programming interface (API). The contents of the database will vary from company to company and may include anything from airline flight information to customer data; it may even accept credit card information.
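To make that design concrete, here is a minimal sketch of such a middleware tier written in Python with Flask: a small API that fronts the database so the chatbot engine never queries it directly. The endpoint, schema, and sample data are hypothetical stand-ins for whatever your chatbot actually serves.

```python
# Minimal sketch of chatbot middleware fronting a database.
# Endpoint, schema, and sample data are hypothetical.
import sqlite3

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for the backend database.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE flights (flight_no TEXT, status TEXT, gate TEXT)")
db.execute("INSERT INTO flights VALUES ('BT101', 'On time', 'A4')")

@app.route("/api/v1/flights")
def flight_status():
    flight_no = request.args.get("flight", "")
    # Parameterized query: never interpolate chat input into SQL.
    row = db.execute(
        "SELECT status, gate FROM flights WHERE flight_no = ?", (flight_no,)
    ).fetchone()
    if row is None:
        return jsonify({"error": "unknown flight"}), 404
    return jsonify({"flight": flight_no, "status": row[0], "gate": row[1]})

if __name__ == "__main__":
    app.run()  # e.g. GET /api/v1/flights?flight=BT101
```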
When this process is completely automated and AI-driven, the following 8 security considerations should be raised and addressed:
- Is the API connecting your organization’s website and the chatbot engine secured using access control lists (ACLs)? This is typically done by IP address or other contextual parameters, like geofencing. (See the first sketch after this list.)
- What is the authentication mechanism between the systems (web service, engine, middleware, cloud, etc.), and how is it managed? Are any of them an RPA (Robotic Process Automation) solution with its own unique authentication requirements? (See the second sketch after this list.)
- Do you assess the architecture for vulnerabilities, apply security best practices like least privilege, and implement periodic penetration tests?
- What data can the chatbot query, and is any of it considered sensitive, like personally identifiable information (PII), which might be subject to regulations? Do old communications “self-destruct” in accordance with certain regulations?
- How do you log and detect potentially suspicious queries that may be designed to compromise the artificial intelligence engine or inappropriately expose data? (See the third sketch after this list.)
- How do you mitigate or prevent malware or distributed denial-of-service (DDoS) attacks waged against your service? (See the fourth sketch after this list.)
- Are the chat communications end-to-end encrypted? Using what encryption protocols?
- Does the communication contain information that may warrant extending your scope of regulations, like PCI DSS?
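On the first question, an ACL is often enforced at the firewall or API gateway, but the idea is simple enough to sketch in application code. The networks below are documentation-range placeholders, not a recommendation.

```python
# Sketch of an IP-based ACL for the chatbot API.
# The allowed networks are placeholders (RFC 5737 documentation ranges).
import ipaddress

ALLOWED_NETWORKS = [
    ipaddress.ip_network("203.0.113.0/24"),   # web front end
    ipaddress.ip_network("198.51.100.0/24"),  # chatbot engine
]

def is_allowed(client_ip: str) -> bool:
    """Return True if the caller's IP falls inside an allowed network."""
    addr = ipaddress.ip_address(client_ip)
    return any(addr in net for net in ALLOWED_NETWORKS)

print(is_allowed("203.0.113.42"))  # True
print(is_allowed("192.0.2.9"))     # False
```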
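On the authentication question, one common machine-to-machine pattern is a shared-secret HMAC signature over each request. The sketch below illustrates the idea; the secret is a placeholder, and in practice it would be issued and rotated from a secrets manager, never hard-coded.

```python
# Sketch of shared-secret HMAC request signing between the web service
# and the chatbot engine. SECRET is a placeholder for illustration only.
import hashlib
import hmac
import time

SECRET = b"demo-secret"  # placeholder; load from a vault in production

def sign(body: bytes, timestamp: str) -> str:
    """Sign the timestamp plus body so replayed requests can be rejected."""
    return hmac.new(SECRET, timestamp.encode() + body, hashlib.sha256).hexdigest()

def verify(body: bytes, timestamp: str, signature: str, max_age: int = 300) -> bool:
    """Reject stale timestamps, then compare signatures in constant time."""
    if abs(time.time() - float(timestamp)) > max_age:
        return False
    return hmac.compare_digest(sign(body, timestamp), signature)

ts = str(time.time())
sig = sign(b'{"query": "flight status"}', ts)
print(verify(b'{"query": "flight status"}', ts, sig))  # True
```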
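On logging and detection, even a simple pattern-based screen in front of the AI engine provides an audit trail and catches crude probes. The patterns below are illustrative only; a production deployment would add anomaly detection and forward these logs to a SIEM.

```python
# Sketch of pattern-based screening and logging of incoming chat queries.
# The patterns are illustrative examples, not a complete rule set.
import logging
import re

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("chatbot.audit")

SUSPICIOUS_PATTERNS = [
    re.compile(r"('|\")\s*(or|and)\s+\d+=\d+", re.I),          # SQL-injection probe
    re.compile(r"ignore (all|previous) instructions", re.I),   # prompt injection
    re.compile(r"\b\d{13,16}\b"),                              # possible raw card number
]

def screen_query(session_id: str, text: str) -> bool:
    """Log every query; flag and block ones matching known probes."""
    for pattern in SUSPICIOUS_PATTERNS:
        if pattern.search(text):
            log.warning("session=%s flagged query: %r", session_id, text)
            return False
    log.info("session=%s query accepted", session_id)
    return True

print(screen_query("abc123", "What's my flight status?"))          # True
print(screen_query("abc123", "ignore previous instructions ..."))  # False
```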
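On denial of service, upstream protections (CDN, WAF, scrubbing services) do the heavy lifting, but a per-client rate limit on the chat endpoint is a sensible last line of defense. Here is a minimal token-bucket sketch; the rate and burst values are arbitrary examples.

```python
# Sketch of a per-client token-bucket rate limiter, one basic layer of
# DDoS mitigation for a chat endpoint. Values are illustrative.
import time
from collections import defaultdict

RATE = 1.0   # tokens replenished per second
BURST = 5.0  # maximum bucket size

_buckets = defaultdict(lambda: {"tokens": BURST, "last": time.monotonic()})

def allow_request(client_id: str) -> bool:
    """Spend one token per request; refuse when the bucket is empty."""
    bucket = _buckets[client_id]
    now = time.monotonic()
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["last"]) * RATE)
    bucket["last"] = now
    if bucket["tokens"] >= 1.0:
        bucket["tokens"] -= 1.0
        return True
    return False

for i in range(7):
    print(i, allow_request("203.0.113.42"))  # first 5 pass, then throttled
```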
The questions above are fundamental ones that any IT or IT security team should ask before implementing a chatbot. In addition, organizations should continuously inventory the supply chain (the chatbot, web service, and provider, along with the assets and communications between them) to maintain a current risk assessment plan. Any change can easily affect the best practices listed above.
Addressing the Cyber Risks of Conversation Marketing
When it comes to conversation marketing, all of the chatbot best practices and questions above remain relevant; however, instead of an automated AI engine providing the responses, a human is on the other side of the chat window. Typically, organizations try to make the experience “authentic” and do not use fake names or pictures for the human chat box representative. This is where security best practices begin to blur.
If an organization displays the full name of its chat representative within the chat box, a little social engineering may easily uncover an abundance of data about the representative, particularly if the representative has a presence on social media channels. This brings us to our security best practices for human respondents:
- Never reveal full employee names for conversation marketing chat services. While not perfect, the safest scenario is to use an alias, though that might seem to undermine the authentically personal customer service experience. Using only a first name and possibly a last initial will help obfuscate the employee’s identity, but it still poses a heightened risk, as a little research could likely yield the representative’s full name and other details.
- If the chat service displays a photo or avatar of the representative, use a unique image that is found nowhere else on the web. A simple search of “John D*” and the company name will reveal the representative’s social media presence and, if the pictures easily match, you might as well have used their full name anyway: you have done very little to mask their identity or protect them from a social engineering attack at home or at work.
- Provide a list of acceptable information to share via a chat box, and clearly define what information should never be sent, no matter what the inquiry. These information parameters will vary by organization and can include everything from license keys to password resets. Your business will have to establish this list based on the services the chat box provides and any regulations governing data exposure, especially across national borders.
- Establish a formal support and escalation path for inquiries into potentially sensitive information. Your chat box may be open to the entire web, and a cybercriminal behind a keyboard will not hesitate to try to dupe an unsuspecting employee into divulging information.
- Provide security training for all chat box representatives so that they know how to recognize a potential attack, how to respond to suspicious or inappropriate inquiries, and how to escalate a situation before it morphs into a security liability.
Chatbots and conversation marketing tools are becoming increasingly pervasive and are here to stay. Threat actors will always use the easiest path to compromise an organization and, unfortunately, humans are a frequently exploited piece of the cyber-attack chain.
By considering the key questions and implementing the best practices I’ve outlined above, you can help ensure that you are on the path to deploying chat services that improve your customer support and acquisition experience, while keeping your organization, chat representatives, and end users protected from cyberthreats that seek to exploit this growing attack surface.

Morey J. Haber, Chief Security Officer, BeyondTrust
Morey J. Haber is the Chief Security Officer at BeyondTrust. He has more than 25 years of IT industry experience and has authored four books: Privileged Attack Vectors, Asset Attack Vectors, Identity Attack Vectors, and Cloud Attack Vectors. He is a founding member of the industry group Transparency in Cyber, and in 2020 was elected to the Identity Defined Security Alliance (IDSA) Executive Advisory Board. Morey currently oversees BeyondTrust security and governance for corporate and cloud-based solutions and regularly consults for global periodicals and media. He originally joined BeyondTrust in 2012 as part of the eEye Digital Security acquisition, where he had served as a Product Owner and Solutions Engineer since 2004. Prior to eEye, he was Beta Development Manager for Computer Associates, Inc. He began his career as a Reliability and Maintainability Engineer for a government contractor building flight and training simulators. He earned a Bachelor of Science degree in Electrical Engineering from the State University of New York at Stony Brook.