What Are Open Ports?
A port is a software-defined number that, together with an IP address, identifies a network endpoint; a port is ‘open’ when software on the system is listening for traffic on it. Any connection made on a TCP/IP network has a source and a destination port, which are used with the respective IP addresses to uniquely identify the sender and receiver of every message (packet) sent.
Ports are essential to any TCP/IP-based communication—we simply can’t do without them. Misconfigured ports and port vulnerabilities provide threat actors with a dangerous backdoor into the environment. A strong security posture hinges on understanding how ports are being used and how they are being secured. This counts double when those ports are internet-facing, as they nearly always are in the cloud.
In this blog, I’ll provide an overview of ports, how they are used, some important risks to be aware of, and how to mitigate port-related cybersecurity risks across your environment.
Why Do We Have Ports?
Almost everyone will recognize an IP address as a mechanism for ‘uniquely’ identifying a system on a network. Today, the most common IP address format uses four octets separated by full stops, e.g., 192.168.1.1. This is part of the fourth version of the IP protocol (IPv4). IPv6, the next version, has a far more extensive addressing scheme, comprising eight 16-bit hexadecimal groups separated by colons, e.g., 2001:0db8:85a3:0000:0000:8a2e:0370:7334. We won’t cover IPv6 in this blog post, but as it also uses ports, what follows is equally applicable.
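As a quick illustration, Python’s standard ipaddress module can parse both formats, and also shows the shortened IPv6 notation you’ll more often see in practice (leading zeros dropped and the longest run of zero groups collapsed to ‘::’):

```python
import ipaddress

# Parse the two address formats mentioned above.
v4 = ipaddress.ip_address("192.168.1.1")
v6 = ipaddress.ip_address("2001:0db8:85a3:0000:0000:8a2e:0370:7334")

print(v4.version)     # 4
print(v6.version)     # 6
print(v6.compressed)  # 2001:db8:85a3::8a2e:370:7334
```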
It’s common for each networked system to have at least one IP address and for each system to offer multiple services on the network. Ports provide the ability to address these services uniquely. Even the simplest system, such as a web server, offers not only web pages, but also remote access to manage the system itself.
We need a mechanism to go one step deeper in addressing the system to focus on the service. This is where the port comes into play.
You can think of the port as a room number in an office building and the IP address as equivalent to the building’s street address. To get a message to a particular office, you could rely on someone reading an incoming message and looking for indication of whom the message is intended. This method might work for small volumes of messages, but it quickly becomes unmanageable. It’s much easier and faster to add a specific office number to the address, allowing the message to be directly delivered. This method has the additional advantage of enabling the message itself to remain private.
For our web server example, we can be confident that ports 80 and 443 will be relevant, along with either port 22 or 3389, depending on the type of operating system (and configuration).
- Port 80 is assigned for HTTP (Hypertext Transfer Protocol) data, the insecure web protocol that’s increasingly consigned to history,
- Port 443 is the default port for HTTPS data, the secure version of HTTP,
- Port 22 is used for SSH (Secure Shell) data, the text-based console protocol used primarily with Linux/Unix systems and network devices,
- Port 3389 is assigned for RDP (Remote Desktop Protocol), primarily used for accessing the console of Windows-based systems.
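On most systems you can confirm these assignments against the local services database. Here’s a small Python sketch; it consults that database (usually /etc/services) where available, falling back to the values listed above if it isn’t:

```python
import socket

# Look up well-known ports by service name via the local services database
# (usually /etc/services); fall back to the expected value if it's missing.
EXPECTED = {"http": 80, "https": 443, "ssh": 22}
resolved = {}
for name, default in EXPECTED.items():
    try:
        resolved[name] = socket.getservbyname(name, "tcp")
    except OSError:
        resolved[name] = default
print(resolved)  # {'http': 80, 'https': 443, 'ssh': 22}
```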
As you’ll have noticed, these port numbers appear to be all over the place. As ports are represented by a 16-bit number, we can use any value from 0 through 65,535 for ports, so why these port numbers? The same organization that allocates IP addresses, the IANA (Internet Assigned Numbers Authority), also allocates port numbers. However, with only 65,536 available ports, it’s far more difficult to get a dedicated port assignment than an IP address.
Here’s how the IANA organizes the ranges of port numbers:
Well-known or Privileged ports (Port range: 0 to 1023) - This range is reserved for the most well-known protocols/applications used over TCP/IP. These ports are assigned only to those protocols and applications that have already been standardized through the TCP/IP RFC process or are undergoing that process. As the services behind these ports are generally key to TCP/IP operations, they are sometimes referred to as System Ports.
Registered ports (Port range: 1024 to 49,151) - Many applications use TCP/IP to exchange data using protocols unique to each of them, and their authors can request a port assignment from this range. One common area is databases, with Microsoft SQL Server assigned ports 1433 and 1434, Oracle Database assigned ports 2483 (which replaces 1521) and 2484, and PostgreSQL assigned 5432.
Private/Dynamic ports (Port range: 49,152 to 65,535) - These ports are open for anyone to use and are not reserved or maintained by the IANA. Ports within this range are commonly used by the software that communicates with the well-known and registered ports mentioned above, often called client software. The well-known and registered ports provide a way to address the service we want to talk to, while using a dynamic port on the client side ensures the return traffic gets to the right place. These assignments generally persist only for the duration of a communication session, ensuring there’s always space available.
Services will register as ‘listeners’ on the appropriate ports as defined by the IANA. For example, you will find the DNS server listening on port 53. When an application or client software starts to communicate with a service, it will register to listen on a port in the private range. The outbound packets include the destination IP address and port, as well as the source IP address and port. This model enables both ends of the link to appropriately address the other. As the client listening port can be different for each session, it allows client software to communicate with multiple services on the same server while easily keeping each session independent.
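This model is easy to see in a few lines of Python. The sketch below, using only the standard library on the loopback interface, starts a listener on a fixed “service” port and then connects a client to it; the client’s source port is assigned by the OS from the ephemeral range (the exact range varies by operating system):

```python
import socket

# A service listening on a fixed loopback port...
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))   # port 0 asks the OS for any free port
server.listen(1)
service_addr = server.getsockname()

# ...and a client whose source port is assigned by the OS.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(service_addr)
client_addr = client.getsockname()
conn, peer = server.accept()    # peer is the client's (IP, ephemeral port)

print(f"client {client_addr} -> service {service_addr}")

conn.close(); client.close(); server.close()
```

The server sees the client’s ephemeral port as the peer address, which is what allows return traffic to reach the right session even when one client holds multiple connections to the same service.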
Security Risks of Open Ports
While being essential for enabling networks to work as they do, ports can offer opportunities for would-be attackers.
For example, open ports can contribute toward system identification. Historically, the default port configuration of a new system installation offered a fingerprint that could reveal much about the system being investigated. Today, this is less the case, as operating systems default to being tightly locked down at install, with non-essential ports opened only as necessary. However, what the services on those necessarily open ports reveal about your system can still be surprising.
There are many tools that can scan a target IP address (or range of IP addresses) and report back on ports that are ‘open’, which means that the port has software listening for traffic on it. These tools are easy to use and readily available. Some example port scanners include Nmap, Netcat, Advanced Port Scanner, and many others—including most vulnerability scanners. There are literally thousands of port scanners to choose from, though Nmap, first released in 1997, is undoubtedly the best known. While port scanners are useful tools for IT security practitioners, they also provide valuable information for threat actors.
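To illustrate the core technique, here’s a minimal TCP ‘connect’ scan in Python, probing a listener we set up ourselves so the sketch is self-contained. (Real scanners like Nmap default to lower-level SYN scans and are far more capable; this is just the simplest form of the idea.)

```python
import socket

def scan(host, ports, timeout=0.5):
    """Minimal TCP connect scan: a port is 'open' if connect() succeeds."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:  # 0 = handshake completed
                open_ports.append(port)
    return open_ports

# Probe a listener we control, so the demo is self-contained.
listener = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
listener.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free port
listener.listen(5)
target_port = listener.getsockname()[1]

found = scan("127.0.0.1", [target_port - 1, target_port])
print(f"open: {found}")
listener.close()
```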
Below is the output of an Nmap scan against the Nmap team’s test target (scanme.nmap.org). This shows the kind of information you can expect to uncover. While this is a single target, Nmap accepts multiple targets in several formats, including lists and CIDR notation.
You can reasonably expect a web server to have ports 80 and 443 open. However, in the example above, you’ll note just port 80 is open for this test target. If port 443 were open, with TLS applied, extra information would be revealed in the digital certificate, such as who owns it.
As the internet evolves toward more secure access, port 443 (HTTPS) will be the most common port for web traffic. However, you’ll find port 80 still open to redirect HTTP requests to the HTTPS address. An attacker can be relatively confident they have found a web server when they see ports 443 and 80 open. In the example above, port 80 reveals the underlying service is the Apache web server running on Ubuntu (a Linux distribution) and that the service supports the HTTP ‘verbs’ GET, HEAD, POST and OPTIONS.
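That redirect behavior is easy to sketch. The snippet below runs a toy HTTP listener on a loopback port (with example.test as a placeholder hostname) that answers every request with a 301 pointing at the HTTPS address, just as a typical port-80 listener does today:

```python
import http.client
import http.server
import threading

class RedirectToHTTPS(http.server.BaseHTTPRequestHandler):
    """Toy port-80 behavior: redirect every HTTP request to HTTPS."""
    def do_GET(self):
        self.send_response(301)
        # example.test is a placeholder hostname for this sketch
        self.send_header("Location", f"https://example.test{self.path}")
        self.end_headers()

    def log_message(self, *args):  # keep the demo output quiet
        pass

server = http.server.HTTPServer(("127.0.0.1", 0), RedirectToHTTPS)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", port)
conn.request("GET", "/login")
resp = conn.getresponse()
print(resp.status, resp.getheader("Location"))  # 301 https://example.test/login
conn.close()
server.shutdown()
```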
If we look at the results for a Windows Active Directory Domain Controller (built in my lab using defaults), you’ll notice that I had to specify ‘-Pn’. This tells Nmap to assume the host is up, and not send a ping first to check. The Windows Server build I used has ping disabled by default. I’ve also omitted the ‘-v’, or verbose, option to save some space. You’ll see we still get the relevant information, but less of the scan’s operational info.
You’ll notice an OS identification with a high degree of confidence, along with the SSL/TLS certs being captured. If you dig into either cert (or run Nmap with the verbose option), you’ll find references to the certificate issuing server, revealing even more about the network with no additional effort.
This “fingerprinting” method offers a mechanism for sub-selecting from the pool of potential targets, reducing an attacker’s chance of being discovered by eliminating large swathes of the probing activity that would otherwise have been generated. An attacker is far more likely to probe port 3389 on a Windows system, and unlikely to look at port 22, because SSH is less commonly configured there. As a result, knowledge of the system type can make the attacker’s network traffic appear more normal. Intrusion Detection Systems (IDS) will monitor for the sweep across ports performed by tools like Nmap. However, by default, Nmap randomizes the order of the port scan, potentially defeating simplistic IDS.
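The randomization itself is trivial to picture. In this sketch, the same ports get probed, just not in the sequential sweep a simplistic IDS signature looks for:

```python
import random

# Nmap-style port-order randomization: same coverage, unpredictable order.
ports = list(range(1, 1025))  # the well-known range
random.shuffle(ports)         # probe order is no longer a sequential sweep
print(ports[:10])
```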
If you find yourself frustrated by false positives triggered by your port/vulnerability scanning activities, add the source IP addresses of your scanners to the ‘exclude’ list rather than completely disabling monitoring. If you’re unable to exclude source IPs on your IDS, I recommend implementing an alternative solution. While changing solutions may seem extreme, false positives will lead to alerts being undervalued, or even ignored.
Implications of Open Ports for Cloud Computing
In the cloud, services are reached via open, internet-facing ports. It is essential that all open ports be identified and secured using techniques like access control lists (ACLs).
Port-based hardening should be applied across every application and operating system in the cloud, and all unnecessary or unused ports should be disabled. The asset management process can help build an architectural map of what hardening is required and which ports must remain open for a system to operate correctly.
If a port is closed, that attack vector is mitigated even if the application behind it is still running. However, proper hardening disables both the unused application and its port. Many applications need a service and its processes operating locally to complete a specific task, but can operate quite happily with the corresponding port closed to inbound connections.
Best Practices to Secure Ports and Reduce Threat Exposure
The more pointed and targeted an attack, the fewer opportunities there are to prevent or detect it at any stage, and the few markers that do exist are often within the attacker’s power to erase.
What can you do to minimize the port-related risks in your environment? Let’s look at best practices for hardening and securing ports.
1. Enumerate and understand your open ports
The first step entails discovery. Identify and list all your open ports, and continuously check for any port-related configuration changes. However, just as importantly, you will need to understand and document all port usage across your environment. This information provides a baseline for port security.
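There’s no fully portable way to enumerate listeners from a scripting standard library, but on Linux the kernel exposes them in /proc/net/tcp. The following sketch (Linux-only, IPv4 TCP only; /proc/net/tcp6 and /proc/net/udp cover the rest) parses that table and verifies it can see a listener we open ourselves:

```python
import socket

def listening_tcp_ports():
    """Linux-only: parse /proc/net/tcp for sockets in LISTEN state (0A)."""
    ports = set()
    with open("/proc/net/tcp") as f:
        next(f)  # skip the header line
        for line in f:
            fields = line.split()
            local_addr, state = fields[1], fields[3]
            if state == "0A":  # TCP_LISTEN
                ports.add(int(local_addr.split(":")[1], 16))
    return ports

# Demo: open a listener, then confirm the enumeration sees it.
s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
s.bind(("127.0.0.1", 0))
s.listen(1)
port = s.getsockname()[1]
found = port in listening_tcp_ports()
print(found)  # True
s.close()
```

In practice you’d feed a snapshot like this into your asset inventory and alert on any change against the documented baseline.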
2. Close any port not actively needed
This can be a challenge, as the operation of the system may depend on a potentially vulnerable port being open. If that’s the case, then Option #3 is appropriate.
3. Restrict port access to specific source IP addresses (or ranges)
This practice is always applicable and offers the best option when Option #2 isn’t available. Not everyone in your environment needs access to the terminal/console of your critical servers—or any of your servers. Restricting access to only the IP ranges used by your admins and relevant systems (those using the service on that port) will minimize the risk profile for the environment.
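In practice this restriction lives in firewall rules, security groups, or network ACLs, but the membership logic itself is simple. Here’s a sketch using Python’s ipaddress module; the admin subnets are hypothetical values for illustration:

```python
import ipaddress

# Hypothetical allowlist: only these source ranges may reach, say, port 22.
ALLOWED_SOURCES = [
    ipaddress.ip_network("10.10.0.0/24"),    # admin VLAN (example value)
    ipaddress.ip_network("192.168.5.0/24"),  # jump hosts (example value)
]

def is_allowed(source_ip: str) -> bool:
    addr = ipaddress.ip_address(source_ip)
    return any(addr in net for net in ALLOWED_SOURCES)

print(is_allowed("10.10.0.42"))   # True  - inside the admin VLAN
print(is_allowed("203.0.113.9"))  # False - arbitrary internet address
```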
If you have remote workers who need access to systems, implement a Secure Remote Access (SRA) solution to get them to those systems. Don’t open the perimeter to allow that access, however confident you might be in your VPN configuration and management skills. It only takes one mistake or a previously unknown vulnerability to lead to a major breach.
4. Prioritize vulnerabilities with known exploits for systems exposed to the network
This practice sounds obvious, yet it’s rarely implemented. An open port with no known vulnerabilities is likely to be more secure than one with known vulnerabilities. Doubly so when Option #3 has been applied.
There is always a risk of zero-day vulnerabilities, and the other security practices listed here will help mitigate zero-day-related risks. However, there’s little excuse for leaving a vulnerable service unresolved within your network, particularly one with known exploits.
5. Implement the principle of least privilege on all endpoints
By ensuring all access at the console is operating at the least privilege necessary, any intrusion will be limited to that level of access. Malware is not magic; it’s constrained by the security model in which it’s operating. Keeping the interactive users’ privilege levels at a minimum will constrain even successful attacks, such as by limiting lateral movement.
6. Don’t allow anyone direct access to highly privileged accounts
Users should not have direct privilege associated with their accounts. Provide any access through a Privileged Access/Account Management (PAM) solution, which is also safely behind multi-factor authentication. PAM systems can change credentials after each use and never release the credentials to the user.
This is accomplished by securely brokering necessary sessions, such as by auto-injecting credentials. This best practice has the benefit of ensuring that even where Options #1 through #4 might have failed, the opportunity to exploit a credential that allows lateral movement is reduced to periods where the account is in use.
7. Reduce the exposed information on open ports
As we saw above, the Apache web server not only announces itself, but also details its specific version and the operating system on which it’s running. Removing this information doesn’t increase the inherent security of the system, but it can make you slightly less of a target for an attacker with a specific set of exploits in their toolbox.
This kind of approach is known as “security through obscurity.” Apache web servers can be limited to returning just “Apache”, excluding the version number. Perhaps the attacker is looking for specific Apache versions, so not presenting this information may get you passed over, at least at first. Hopefully, your IDS will notice the attention and allow you to address Option #3 above as a matter of urgency. The Apache web server, like many services, can run on any number of operating systems, so not revealing this information may buy you some time.
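In Apache’s case, this is a two-line configuration change (in httpd.conf or apache2.conf, depending on your distribution):

```apacheconf
# Report only "Apache" in the Server response header (no version or OS)
ServerTokens Prod

# Omit the version/hostname footer on server-generated pages
ServerSignature Off
```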
Next Steps – Check for open ports and access-related security risks
Ensuring underlying systems are secured in-line with industry best practices—including effective vulnerability management, secure remote access, controlled and monitored access to privileged accounts and sensitive/critical systems, and endpoint privilege management—will greatly enhance your cybersecurity posture.
While you may have hardened your environment with secure ports, privileges, and access pathways at a moment in time, you must stay continuously mindful of changes to your own network. Changes to a system’s function, whether wholesale or through expanding use cases, can expand its port requirements as network services are added.
To illuminate access-related risks including open ports, overprivileged accounts, misconfigurations, privileged credentials, remote access tools, and more across your environment, try BeyondTrust’s free Privileged Access Discovery Application, the most powerful free tool of its kind. Use the application to identify and mitigate high-risk areas quickly, while working diligently to understand and stay on top of your risk over time.
Brian Chappell, Chief Security Strategist
Brian has more than 30 years of IT and cybersecurity experience in a career that has spanned system integrators, PC and Software vendors, and high-tech multi-nationals. He has held senior roles in both the vendor and the enterprise space in companies such as Amstrad plc, BBC Television, GlaxoSmithKline, and BeyondTrust. At BeyondTrust, Brian has led Sales Engineering across EMEA and APAC, Product Management globally for Privileged Password Management, and now focuses on security strategy both internally and externally. Brian can also be found speaking at conferences, authoring articles and blog posts, as well as providing expert commentary for the world press.