Zero Trust Explained – A Vendor-Neutral Look at What Is Required for Zero Trust

I’ve recently recorded another video course. My new course focuses on Zero Trust. In this course, I cover how the concept of Zero Trust was created, how it has changed over the years, and the key ingredients needed to build a successful Zero Trust strategy. Here is a technical, conceptual breakdown of Zero Trust without mentioning a single vendor. 

There are many publications on this topic, including guidelines, vendor papers, etc., which all tend to lay out “Zero Trust Principles” that make up required capabilities or outcomes. Rather than lay out a dozen principles, I believe there are four specific requirements. Those are the following:

  • Continuously verify context: I’ve heard different ways of saying essentially the same thing, which is to continuously validate the who or what as well as all associated risk and context. This means not only validating who with multifactor solutions or what with certificates, but also collecting other details such as time of connection, location, and current risk. Risk is impacted by the current state of the system, including whether it’s running proper software, patched, secured, etc. All of this needs to be checked upon each connection request, and a decision is made based on the current data. 
  • Provide least privilege access: Limiting access to only what is needed has been and will always be a fundamental Zero Trust principle. 
  • Dynamic segmentation: Not only should access be limited to what is needed, but all forms of pivoting from that resource need to be removed. For example, an administrator shouldn’t be able to access a system, launch SSH, and pivot to another system. The administrator needs to go through the enforcement point for each connection. 
  • Assume breach: Essentially, this last principle states that the security operations center needs to monitor the entire system with the viewpoint that a threat actor will breach different parts of the architecture. Assuming breach implies a defense-in-depth design, meaning this principle enforces the need for proper security on hosts, services, networks, etc. I see this last principle as the “other security” beyond what is in focus for Zero Trust capabilities. 
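The first principle can be sketched in code. The following is a minimal, illustrative example of a per-request context check; the field names, risk scale, and threshold are all hypothetical assumptions, not any product’s actual policy model:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical context gathered at connection time (the "who or what"
# plus associated risk and state described above).
@dataclass
class RequestContext:
    user_verified_mfa: bool   # who: multifactor check passed
    device_cert_valid: bool   # what: device certificate check passed
    device_patched: bool      # current state of the system
    request_time: datetime    # time of connection
    location_allowed: bool    # location within policy
    risk_score: float         # assumed scale: 0.0 (low) .. 1.0 (high)

def access_decision(ctx: RequestContext, risk_threshold: float = 0.5) -> bool:
    """Re-evaluated on every connection request, not once per session."""
    if not (ctx.user_verified_mfa and ctx.device_cert_valid):
        return False  # identity or device failed verification
    if not ctx.device_patched or not ctx.location_allowed:
        return False  # posture or context out of policy
    return ctx.risk_score < risk_threshold

# Example: a verified, patched device at low risk is allowed.
ctx = RequestContext(True, True, True, datetime.now(), True, 0.2)
print(access_decision(ctx))  # True
```

The point of the sketch is that the decision is a function of the current data: change any input (an unpatched device, a higher risk score) and the same request is denied.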

There are different ways to meet these principles; however, there is a general approach to how they all come together. First, the core design means you have something trying to access something else. That could be a user, application, etc. trying to access a database, application, etc. Let’s say it’s “A” trying to access “B”. Both need to be collapsed into one thing, rather than having different Zero Trust architectures for different As and Bs, to reduce complexity. The design essentially says THERE IS NO INSIDE or OUTSIDE level of trust. It’s Zero Trust for everything, regardless of where it’s connecting from or what it is. 

Collapsing B means the resources need to appear to the requestor as if they exist in the same place. It shouldn’t matter whether a resource lives within the organization, within one or more cloud service providers, or in a SaaS service. There are different ways to collapse B, such as connecting an on-premises and a cloud data center so they share data and offer the application the same way regardless of where the connection comes from. Another approach is deploying a connector within the data center and the cloud, which pushes all access through a Secure Access Service Edge (SASE) provider. However collapsing is done, there should be one B, or end result, from A’s viewpoint when accessing resources. 

Next, there needs to be a policy enforcement point. This is what validates principle 1 and determines whether access is granted. The policy enforcement point could live at three different points of the request between A and B. One option is living on A, meaning it’s an agent installed on the requestor. As A requests access, traffic is directed through the enforcement point before being forwarded to B. A classic example of this is an always-on VPN; however, there are different approaches to make the agent concept work. 

A second option for the policy enforcement point is creating a pitstop in the cloud, aka what Secure Access Service Edge (SASE) is all about. Traffic can be pushed through this SASE environment using an agent, routed via SD-WAN, or by whatever means ensure all As have to go through the cloud pitstop before they can access B. 

A third approach is placing the enforcement point at the edge of B. A common example is having the application redirect all requests through a policy enforcement point before access is granted. This can work only if all access to the application has to go through the edge. There can’t be an exception, including for developers. 
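One way to picture this edge-of-B option is a wrapper that refuses any request that hasn’t cleared the enforcement check, so nothing reaches the application directly. The `policy_enforcement_point` stub and the request shape below are illustrative assumptions, not a real product API:

```python
# Hypothetical edge-of-B enforcement: every application handler is wrapped,
# so no request path exists that bypasses the policy enforcement point.
def policy_enforcement_point(request: dict) -> bool:
    """Stand-in for the real check; in practice this would validate
    identity, device posture, and risk per principle 1."""
    return request.get("verified") is True

def enforce(handler):
    """Decorator that redirects unverified requests to the enforcement point."""
    def wrapped(request: dict) -> dict:
        if not policy_enforcement_point(request):
            return {"status": 403, "body": "redirect to enforcement point"}
        return handler(request)
    return wrapped

@enforce
def app(request: dict) -> dict:
    return {"status": 200, "body": "application data"}

print(app({"verified": True})["status"])   # 200
print(app({"verified": False})["status"])  # 403
```

The “no exceptions” requirement maps to the decorator applying to every handler: if even one developer endpoint skips the wrapper, the edge model breaks.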

However you decide to meet these requirements for a centralized policy enforcement point, you will typically have resource A go through a policy enforcement point before being granted access to resource B. Principles 2 and 3 are met by having the policy enforcement point grant access only to what is needed, as well as ensuring no other access is possible through the implementation of dynamic segmentation. This could be accomplished using session-based access (i.e., no direct IP session) or implementing very strict segmentation. 
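The session-based idea can be sketched as a broker that hands out per-connection sessions instead of network routes. The policy table, names, and token format below are hypothetical; the point is that a pivot is denied because it is simply not in the policy:

```python
# Hypothetical policy enforcement point combining least privilege
# (principle 2) and dynamic segmentation (principle 3).
ALLOWED = {
    ("admin-alice", "db-prod"): {"sql"},         # only what is needed
    ("app-frontend", "api-backend"): {"https"},
}

def broker_session(requestor: str, resource: str, protocol: str) -> str:
    """Return an opaque per-connection session instead of an IP route,
    so the requestor cannot pivot onward from the resource."""
    if protocol not in ALLOWED.get((requestor, resource), set()):
        raise PermissionError(f"{requestor} -> {resource} over {protocol} denied")
    return f"session:{requestor}:{resource}:{protocol}"

print(broker_session("admin-alice", "db-prod", "sql"))
# A second hop (pivoting from db-prod via SSH) has no policy entry:
try:
    broker_session("db-prod", "other-host", "ssh")
except PermissionError as e:
    print(e)
```

Because each connection is brokered individually, the administrator in the earlier SSH example would have to come back through the enforcement point for the second system, where the request would be evaluated and, here, denied.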

Lastly, the security operations center needs to meet principle 4 by having access to all security tools associated with the Zero Trust architecture so it can continuously monitor all requests from A to B and validate that no unauthorized requests are reaching B. This allows for creating baselines of access activity along with monitoring the risk associated with users as they access resources. 
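As a toy illustration of that baselining idea, the snippet below builds a baseline from a historical window of (requestor, resource) pairs and surfaces anything outside it; the log entries and window size are made up for the example:

```python
from collections import Counter

# Hypothetical access log: (requestor, resource) pairs seen by the SOC.
access_log = [
    ("alice", "db-prod"), ("alice", "db-prod"),
    ("bob", "api-backend"), ("alice", "db-prod"),
    ("mallory", "db-prod"),  # not seen in the baseline window
]

# Treat the first four events as the historical baseline window.
baseline = Counter(pair for pair in access_log[:4])

def flag_anomalies(events):
    """Assume breach: anything outside the baseline is surfaced for review."""
    return [e for e in events if e not in baseline]

print(flag_anomalies(access_log))  # [('mallory', 'db-prod')]
```

A real SOC pipeline would of course weigh frequency, time, and risk rather than simple set membership, but the assume-breach posture is the same: authorized paths are baselined, and everything else is investigated.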

There are different vendors that lay out various versions of Zero Trust architectures, which all tend to follow this design. If you want to learn more about the dirty details, check out my video course coming soon to Pearson. 
