
What Exposed OPA Servers Can Tell You About Your Applications


With the appropriate request or token, an attacker could gain even more information about these services and scan for vulnerabilities or other entry points to break into an organization’s systems. We strongly recommend that companies currently using OPA as a policy-as-code solution review their deployments to ensure they don’t unintentionally expose their APIs and policies online. In some cases, companies might be using OPA without realizing it, as several managed Kubernetes service providers rely on OPA for policy enforcement.

Keep in mind that, for ethical reasons, we only queried the policy-listing endpoints of the REST API. However, many other endpoints and methods are available that not only list sensitive information but also allow an attacker to modify or even delete data and objects on an exposed OPA server. Some of them are:

  • Create or update a policy: PUT /v1/policies/{id}
  • Delete a policy: DELETE /v1/policies/{id}
  • Patch a document (Data API): PATCH /v1/data/{path:.+}
  • Delete a document (Data API): DELETE /v1/data/{path:.+}

All of these can be found in the OPA REST API Documentation.
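To make the risk concrete, the sketch below shows how the write endpoints above map to plain HTTP requests. The host "opa.example.com" and the policy ID "example" are illustrative placeholders, not real targets; the requests are only constructed here, never sent.

```python
# Sketch: mapping the OPA REST API write endpoints to HTTP requests.
# Host and policy ID are placeholders; nothing is actually sent.
from urllib.request import Request

BASE = "http://opa.example.com:8181"

def opa_request(method: str, path: str, body: bytes = b"") -> Request:
    """Build (but do not send) an OPA REST API request."""
    return Request(BASE + path, data=body, method=method)

# Create or update a policy (Policy API)
put_policy = opa_request("PUT", "/v1/policies/example",
                         b"package example\n\nallow := true\n")
# Delete a policy (Policy API)
delete_policy = opa_request("DELETE", "/v1/policies/example")
# Patch a document (Data API)
patch_doc = opa_request("PATCH", "/v1/data/servers")
# Delete a document (Data API)
delete_doc = opa_request("DELETE", "/v1/data/servers")

for r in (put_policy, delete_policy, patch_doc, delete_doc):
    print(r.get_method(), r.full_url)
```

On an exposed server with no authentication, each of these requests would succeed, which is why the hardening steps below matter.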

Protection of OPA servers

First, OPA servers should not be exposed to the internet. It is therefore necessary to restrict this access to prevent anyone from digging into your OPA configurations via the REST API. The standard way to deploy OPA for the authorization use case is to run OPA on the same machine as the application requesting decisions from it. This way, organizations do not need to expose OPA to the internet or the internal network, as communication takes place over the localhost interface. Deploying OPA this way also means that organizations generally won’t need to enable authentication/authorization for the REST API, as only a process running on the same machine would be able to query the OPA instance. To do this, OPA can be started with “opa run --addr localhost:8181” so that it binds only to the localhost interface.

Second, when using a policy-as-code tool such as OPA, it is important to protect the policies by storing them in a source code management (SCM) system. It’s also essential to have proper access controls in place to monitor who can change what in these policies, through features like branch protection and code owners. With the SCM system, organizations can create a streamlined process for reviewing and approving any changes to these policies, ensuring that everything in the source code is also reflected in the production OPA servers.


As shown in Figure 4, most of these exposed OPA servers found on Shodan were not using any type of encryption for communication, as this is not enabled by default. To configure TLS and HTTPS, system administrators must create a certificate and private key, and supply the following command-line flags:

  • The path of the TLS certificate: --tls-cert-file=<path>
  • The path of the TLS private key: --tls-private-key-file=<path>

For up-to-date information on this process, please consult the OPA TLS and HTTPS documentation.
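Putting the pieces together, the sketch below assembles an “opa run” command line with the TLS flags above. The certificate and key paths are placeholders for files the operator must create beforehand; the command is only printed here, not executed.

```python
# Sketch: assembling an "opa run" invocation with TLS enabled.
# Certificate/key paths are placeholders; nothing is executed.
def opa_tls_argv(cert_path: str, key_path: str,
                 addr: str = "localhost:8181") -> list[str]:
    """Build the argv for an OPA server bound to localhost with TLS."""
    return [
        "opa", "run", "--server",
        "--addr", addr,
        f"--tls-cert-file={cert_path}",
        f"--tls-private-key-file={key_path}",
    ]

print(" ".join(opa_tls_argv("/etc/opa/tls.crt", "/etc/opa/tls.key")))
```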

Authentication and Authorization

By default, OPA authentication and authorization mechanisms are disabled. This is described in the official OPA documentation, and it is essential that system administrators and DevOps engineers enable these mechanisms immediately after installation.

Both mechanisms can be configured via the following command line flags according to the OPA documentation:

  • Authentication: --authentication=<scheme>
    This can be bearer tokens (--authentication=token) or client TLS certificates (--authentication=tls).
  • Authorization: --authorization=<scheme>
    This uses Rego policies to decide who can do what in OPA. It can be enabled by setting --authorization=basic when starting OPA and providing a minimal authorization policy.

More details regarding this process can be found in the official OPA authentication and authorization documentation.
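Once token authentication is enabled (--authentication=token), clients must present a bearer token on every REST API call. The sketch below shows what such a call looks like; the token value and host are placeholders, and the request is only constructed, never sent.

```python
# Sketch: an OPA REST API request carrying a bearer token, as required
# once --authentication=token is enabled. Token/host are placeholders;
# the request is only built, not sent.
from urllib.request import Request

def authed_opa_request(path: str, token: str) -> Request:
    """Build (but do not send) an authenticated OPA REST API request."""
    req = Request("https://localhost:8181" + path, method="GET")
    req.add_header("Authorization", f"Bearer {token}")
    return req

req = authed_opa_request("/v1/policies", "my-secret-token")
print(req.get_method(), req.full_url)
print(req.get_header("Authorization"))
```

With --authorization=basic also enabled, a Rego policy on the server then decides whether the identity behind this token may list policies at all.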

Cloud security recommendations

Kubernetes is one of the most popular platforms among developers, as proven by its high adoption rate which shows no signs of slowing down anytime soon. With an ever-expanding user base, Kubernetes deployments need to be protected against threats and risks. To do this, developers can turn to policy-as-code tools, which can help implement controls and validate procedures in an automated way.

In addition to diligently applying some basic management rules to keep Kubernetes clusters secure, organizations can also benefit from cloud-specific security solutions such as Trend Micro™ Hybrid Cloud Security and Trend Micro Cloud One™.

Trend Micro helps DevOps teams build securely, ship fast, and operate anywhere. The Trend Micro™ Hybrid Cloud Security solution provides powerful, streamlined, and automated security within the organization’s DevOps pipeline and delivers multiple XGen™ threat defense techniques to protect physical, virtual, and cloud workloads at runtime. It is powered by the Cloud One™ platform, which provides organizations with unique visibility into their hybrid cloud environments and real-time security through network security, workload security, container security, application security, file storage security, and compliance services.

For organizations seeking runtime security for workloads, container images, and file and object storage delivered as software, Deep Security™ scans workloads and container images for malware and vulnerabilities at any point in the development pipeline to prevent threats before they are deployed.

Trend Micro™ Cloud One™ is a security services platform for cloud builders. It provides automated protection for cloud migration, cloud-native application development, and cloud operational excellence. It also helps identify and resolve security issues faster and improve delivery times for DevOps teams. It includes the following elements: