Zero Trust: We Can Get 90% of the Way There

“Zero trust” is talked about a lot, perhaps too much. People sometimes say “zero trust” when they mean something else entirely, such as really strong authentication.

But the best way to understand zero trust is to imagine two things: 1) enforcing corporate access rules on any system, app, or data, whether they live in the corporate network or in someone else’s application, and 2) not needing a corporate network to protect the information an organization owns.

Security engineers will likely think of many technologies that can make this happen: federation, identity and access management (IAM), single sign-on (SSO), and so on. But zero trust requires a great deal of coordination among the many systems (and organizations) that would make such a model work.

NIST provides a diagram of zero trust architecture in NIST Special Publication 800-207. Take a gander at each component and process implied there. For zero trust to actually work, every organization, system, application, and data store within a zero trust community would need to operate with these components and processes.

(Source: National Institute of Standards and Technology, NIST SP 800-207)

That expects a lot more than we do (or can do) now. Product vendors, compliance enforcement, a common language and ecosystem for threat intelligence, and protocols (technical and legal) for sharing logs and alerts are simply not in place.

But that does not mean we are stuck. We can still substantially achieve the objectives of zero trust with tools that already exist. As we look into the history and architecture of zero trust, keep in mind the stated objective of NIST SP 800-207: “to prevent unauthorized access to data and services coupled with making the access control enforcement as granular as possible.”

A fully deployed zero trust model, operated by many collaborating organizations, would be necessary to fulfill the “as possible” phrase. But what if we changed that phrase to “as plausible”?

 

“Trusted” Users and Corporate Networks

Historically, information security within organizations has been implemented through a perimeter-based network security model, which assumed that users inside the boundary of the corporate network could be considered “trusted.” Security was enforced much more rigorously on users coming from outside the network, who were considered “untrusted” by default.

However, that paradigm has broken down for two reasons:

  1. Insider threats cause a significant number of cyber incidents every year.
  2. Increased use of mobile devices and cloud solutions has made perimeter-based security irrelevant for many internal users today.

Let’s look at these two factors a little more closely.

 

Insider threats cause a significant number of cyber incidents every year

 

Cyber incidents don’t just come from outside sources. Many of them come from insider threats as well. Insider threats are users with authorized access to company data whose access is used, either maliciously or unintentionally, to cause harm to the business. Here are five statistics that illustrate the extent of insider threats:

  • As much as 60% of data breaches are caused by insider threats. (Source)
  • Insider threat incidents have risen 44% over the past two years. (Source)
  • Costs per incident have risen more than a third over the last two years, to $15.38 million. (Source)
  • 85% of organizations say that they find it difficult to determine the actual damage of an insider attack. (Source)
  • The time to contain an insider threat incident increased from 77 days to 85 days over the last year. (Source)

As you can see, the idea of considering a user inside the network to be “trusted” (just because they’re inside) isn’t viable.

 

Increased use of mobile devices and cloud solutions has made perimeter-based security irrelevant for many internal users today.

 

Not only are insider-sourced incidents increasing, but many users operate outside a traditional corporate network today. The use of mobile devices has increased dramatically over the past decade, with many businesspeople accessing vital corporate systems through their smartphones from anywhere. And the rise of cloud-based solutions (especially since the COVID-19 pandemic forced so many into working remotely) has shattered the idea of a “perimeter” for an organization to protect.

 

Zero Trust Security: How it Started and Evolved

The idea of a new way of looking at security that didn’t favor “trusted insiders” over “untrusted outsiders” began to emerge as far back as 2004. The Jericho Forum (which later merged into The Open Group Security Forum) promoted a new concept called de-perimeterisation, which focused on protecting a company’s data on multiple levels by using encryption and dynamic data-level authentication. The focus on “secure assets where they are” turned out to be visionary: the iPhone wouldn’t be introduced until three years later, in 2007!

In 2010, John Kindervag, then an analyst at Forrester Research, introduced the term “zero trust,” based on the idea that an organization shouldn’t trust any resource, whether inside or outside its network; instead, it should verify every resource that tries to connect to its network before granting access. Kindervag realized that the human emotion of trust was more than a simple flaw; it represented a major liability for enterprise networks.

In the zero trust model, all network traffic is untrusted no matter its origin.

The original zero trust model had three main principles:

  • Organizations must provide secure access to their networks, no matter the location.
  • Organizations must control access so that users can only access the resources they need, and that access must change if their roles change or if they leave the organization.
  • Organizations must log traffic and inspect the logs to ensure users are adhering to the rules.

Those principles needed to evolve with the increased use of mobile devices and increased adoption of cloud-based solutions.

In 2014, Google rolled out BeyondCorp, the search giant’s implementation of the zero trust security model that shifted access controls from the network perimeter to individual users and devices.

A 2019 Google blog post lists the three main principles of BeyondCorp as:

  • Connecting from a particular network does not determine which service you can access.
  • Access to services is granted based on what the infrastructure knows about you and your device.
  • All access to services must be authenticated, authorized and encrypted for every request, not just the initial access (see the sketch below).
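That last principle is concrete enough to sketch. Below is a minimal illustration in Python, with hypothetical names and rules of our own invention (not Google’s actual implementation), of a per-request access decision that records the source network but never consults it, deciding instead on what the infrastructure knows about the user and device:

```python
from dataclasses import dataclass

@dataclass
class Device:
    managed: bool          # enrolled in device management
    disk_encrypted: bool   # storage is encrypted
    patched: bool          # OS and agents up to date

@dataclass
class Request:
    user: str
    user_groups: set
    device: Device
    service: str
    encrypted: bool        # request arrived over an encrypted channel (e.g., TLS)
    source_network: str    # recorded, but deliberately never consulted

# Hypothetical policy: which groups may reach which service.
SERVICE_POLICY = {"payroll-app": {"hr"}, "wiki": {"hr", "engineering"}}

def authorize(req: Request) -> bool:
    """Evaluate EVERY request; connecting from the corporate LAN grants nothing."""
    if not req.encrypted:                                  # always encrypted
        return False
    d = req.device                                         # device posture matters
    if not (d.managed and d.disk_encrypted and d.patched):
        return False
    allowed_groups = SERVICE_POLICY.get(req.service, set())
    return bool(allowed_groups & req.user_groups)          # identity, not network

# The same request is allowed or denied identically from any network.
laptop = Device(managed=True, disk_encrypted=True, patched=True)
req = Request("alice", {"hr"}, laptop, "payroll-app", True, "coffee-shop-wifi")
print(authorize(req))  # True, even though the source network is untrusted
```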

With that in mind, Forrester analyst Chase Cunningham launched the Zero Trust eXtended (ZTX) Ecosystem in 2018. The ZTX framework maps technologies and solutions to its pillars, which have evolved into seven operational domains of zero trust: five for security controls and two for interaction across the domains:

Operational Domains of Zero Trust (Source: Forrester)

The Cybersecurity and Infrastructure Security Agency (CISA) and the US Office of Management and Budget (OMB) recognize Forrester’s seven operational domains and add one more – governance – which is reflected in the diagram above.

Today, Forrester’s definition of modern zero trust reads as follows:

Zero Trust is an information security model that denies access to applications and data by default. Threat prevention is achieved by only granting access to networks and workloads utilizing policy informed by continuous, contextual, risk-based verification across users and their associated devices. Zero Trust advocates these three core principles: All entities are untrusted by default; least privilege access is enforced; and comprehensive security monitoring is implemented.

 

Zero Trust Network Access (ZTNA)

The term Zero Trust Network Access (ZTNA) was introduced by Gartner in 2019. Gartner defines ZTNA as:

A product or service that creates an identity- and context-based, logical access boundary around an application or set of applications. The applications are hidden from discovery, and access is restricted via a trust broker to a set of named entities. The broker verifies the identity, context and policy adherence of the specified participants before allowing access and prohibits lateral movement elsewhere in the network. This removes application assets from public visibility and significantly reduces the surface area for attack.

 

In short, ZTNA provides secure access to specific private applications, as opposed to giving the user access to the enterprise network.

Unlike VPNs or firewalls, ZTNA services are designed to securely connect specific entities to each other, without the need for network access. Those secure connection services are not limited to users; they apply to application-to-application traffic as well.
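To make the broker pattern tangible, here is a toy sketch in Python, with a hypothetical directory, entitlements, and checks of our own devising (not any vendor’s product). The broker hides the application’s address, verifies identity and context against policy, and returns a handle to exactly one application, never a route into the network:

```python
# A toy ZTNA trust broker. Applications stay "dark": clients never learn
# their addresses. A client asks the broker for one named application, and
# the broker verifies identity, context and policy before brokering a session.

APP_DIRECTORY = {"crm": "10.0.5.21:8443"}   # hidden from all clients
ENTITLEMENTS = {                            # access for named entities only;
    ("alice", "crm"),                       # a user...
    ("billing-svc", "crm"),                 # ...or another application
}

def broker_connect(identity: str, app: str, context: dict) -> dict:
    """Verify, then broker a single app session; never grant network access."""
    if (identity, app) not in ENTITLEMENTS:
        raise PermissionError("no entitlement for this application")
    if not context.get("device_compliant"):
        raise PermissionError("device out of compliance")
    # A real broker would stitch together two outbound tunnels here.
    # Note what is absent: no route onto the network, so no lateral movement.
    return {"session": f"{identity}->{app}", "tunnel_to": APP_DIRECTORY[app]}

print(broker_connect("alice", "crm", {"device_compliant": True}))
```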

However, shortcomings have been identified in ZTNA approaches, including support for only coarse-grained access controls, an “allow and ignore” approach to both user and app traffic, and little or no advanced security applied consistently across all apps – even rudimentary data loss prevention (DLP). With the dramatic increase in remote and hybrid work since the COVID-19 pandemic, addressing these shortcomings is more important than ever.

 

ZTNA 2.0

As a result, it took just three short years for a proposed replacement for ZTNA to emerge. Introduced by Palo Alto Networks earlier this year, ZTNA 2.0 is designed to overcome the limitations of legacy ZTNA solutions and provide secure connections that deliver better security outcomes for businesses with hybrid workforces.

 

Comparison of ZTNA 1.0 and 2.0 (Source: Palo Alto Networks)

 

To effectively solve the shortcomings of ZTNA 1.0 approaches, ZTNA 2.0 is purpose-built to deliver:

  • True least-privileged access: Identify applications based on App-IDs at Layer 7. This enables precise access control at the app and sub-app levels, independent of network constructs like IP addresses and port numbers.
  • Continuous trust verification: Once access to an app is granted, trust is continually assessed based on changes in device posture, user behavior and app behavior (see the sketch after this list). If any suspicious behavior is detected, access can be revoked in real time.
  • Continuous security inspection: Employ deep and ongoing inspection of all traffic, even for allowed connections, to prevent all threats, including zero-day threats. This is especially important in scenarios where legitimate user credentials are stolen and used to launch attacks against applications or infrastructure.
  • Protect all data: Apply consistent control of data across all apps used in the enterprise, including private apps and SaaS, with a single DLP policy.
  • Secure all apps: Consistently secure all applications used across the enterprise, including modern cloud native apps, legacy private apps and SaaS apps, as well as apps that use dynamic ports and those that leverage server-initiated connections.
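Continuous trust verification is the easiest of these to sketch. The Python below is our own simplified illustration, not Palo Alto Networks’ implementation: a session holds no permanent trust, and any degraded posture or behavior signal revokes access mid-session:

```python
import time

class Session:
    """A session whose trust is re-evaluated for as long as it lives."""

    def __init__(self, user: str, app: str):
        self.user, self.app = user, app
        self.active = True
        self.granted_at = time.time()

    def reassess(self, posture_ok: bool, behavior_anomaly: bool) -> None:
        # Trust is never permanent: a degraded signal revokes access
        # in real time, mid-session, not at the next login.
        if not posture_ok or behavior_anomaly:
            self.active = False

sess = Session("alice", "crm")
sess.reassess(posture_ok=True, behavior_anomaly=False)
print(sess.active)   # True: signals are still healthy
sess.reassess(posture_ok=True, behavior_anomaly=True)
print(sess.active)   # False: access revoked as soon as behavior looks wrong
```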

 

Here’s an illustration of ZTNA 2.0:

Zero Trust Network Access 2.0 Diagram (Source: Palo Alto Networks)

 

ZTNA 2.0 solutions are designed to provide superior security while delivering uncompromised performance and exceptional user experiences, all from a single unified approach.

Zero Trust Isn’t a “One Size Fits All” Solution

Having said that, let’s get practical for a moment. It may seem that zero trust is something you can buy as a turnkey, “one size fits all” solution, but that approach can be expensive, and it may still not fully protect your organization’s most sensitive assets and data. A more practical approach is to identify the assets and data that are most sensitive in your organization, determine which of them require additional security controls, and implement protection mechanisms where they’re most needed.

Those mechanisms include:

  • Multi-Factor Authentication (MFA): Authentication that requires two or more factors: knowledge (something only the user knows), possession (something only the user has), and inherence (something only the user is).
  • Privileged Access Management (PAM): A cybersecurity strategy to control, monitor, secure and audit privileged identities and activities across an IT environment.
  • Data at Rest Encryption: The practice of encrypting stored data to prevent unauthorized access.
  • Data Loss Prevention (DLP): Software that detects potential data breaches and data exfiltration, and prevents them by monitoring, detecting and blocking sensitive data while in use, in motion, and at rest.
  • Micro-Segmentation: A network security technique that logically divides the data center into distinct security segments, down to the individual workload level, and then defines security controls and delivers services for each unique segment.
  • Next-Generation Firewalls: The third generation of firewall technology, combining a traditional firewall with additional capabilities such as identifying the applications in use and the users accessing them on endpoints and networks.
  • Network Access Control (NAC): Technology that unifies endpoint security, user or system authentication, and network security enforcement.
  • Role-Based Access Control (RBAC): A method of restricting network access based on the roles of individual users within an enterprise (see the sketch after this list).
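As a flavor of how mechanisms like these look in code, here is a minimal RBAC check in Python; the roles, users and permissions are hypothetical. Because access derives from roles rather than from individuals, a role change or a departure updates access in one place:

```python
# Minimal role-based access control: users map to roles, roles to permissions.
ROLE_PERMISSIONS = {
    "hr_analyst": {"read:salaries"},
    "hr_manager": {"read:salaries", "write:salaries"},
    "engineer":   {"read:wiki"},
}
USER_ROLES = {"alice": {"hr_manager"}, "bob": {"engineer"}}

def has_permission(user: str, permission: str) -> bool:
    """Access flows from the user's roles, never from the user directly."""
    return any(permission in ROLE_PERMISSIONS.get(role, set())
               for role in USER_ROLES.get(user, set()))

print(has_permission("alice", "write:salaries"))  # True
print(has_permission("bob", "write:salaries"))    # False
```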

 

Some of these mechanisms are specific to accessing assets and data within your network; others apply to accessing assets and data from any application, including cloud applications, whether from traditional workstations or mobile devices. Establishing the “best fit” for security depends on where your sensitive data resides and how you’re accessing it. A surgical approach to zero trust security is often preferable to an enterprise-wide approach.

 

Tools That Will Get You 90% of the Way There

While a surgical approach is preferable, you don’t have to be the surgeon. Excellent technologies are available that will help you achieve zero trust’s objectives, even though the full zero trust infrastructure is not yet in place.

In your own environment – whether that is assets in your secured castle-and-moat network or in your extended use of cloud services – you can apply IAM, micro-segmentation, and MFA to achieve NIST’s stated objective, “to prevent unauthorized access to data and services coupled with making the access control enforcement as granular as possible.” Or, as we will put it, “as granular as plausible.”

Micro-segmentation tools, such as Guardicore, will orchestrate policies by enforcing access control lists (ACLs) at the applications and systems that are in your control. Keep in mind that ACLs can filter on systems, protocols, users, user groups, IP address ranges and more, and you can start to imagine the power of orchestrated ACLs. If a person is authorized to access a certain type of sensitive data but is on a machine that is not authorized for it, a policy orchestrator can deny that access until she gets to a computer that is allowed it. If data of a certain class may be copied by an application but not by a user, then micro-segmentation can enforce that rule even if the database engine allows it.
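Here is a minimal sketch of that person-on-the-wrong-machine example, in Python with a rule structure of our own invention (not Guardicore’s actual policy language). The allow decision requires both the user’s entitlement and the machine’s authorization for the data class:

```python
# Orchestrated ACLs: an allow decision needs BOTH the user's entitlement
# AND the machine's authorization for that class of data.
USER_CLEARANCE = {"alice": {"phi"}}             # alice may access PHI...
MACHINE_CLEARANCE = {
    "ws-042": {"phi"},                          # ...but only from ws-042;
    "kiosk-7": set(),                           # the lobby kiosk is never cleared
}

def allow(user: str, machine: str, data_class: str) -> bool:
    return (data_class in USER_CLEARANCE.get(user, set()) and
            data_class in MACHINE_CLEARANCE.get(machine, set()))

print(allow("alice", "kiosk-7", "phi"))  # False: right user, wrong machine
print(allow("alice", "ws-042", "phi"))   # True: same user, authorized machine
```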

IAM will enforce a common set of role-based access rules on systems and applications that open their authentication APIs to permit shared enforcement.

And MFA will make certain(-ish) that the account user is who they purport to be.
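For a taste of how the possession factor works, here is a minimal time-based one-time password (TOTP, RFC 6238) check using only Python’s standard library. It is a sketch for illustration; a real deployment should use a vetted MFA product or library:

```python
import hashlib, hmac, struct, time

def totp(secret: bytes, at=None, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then truncate."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                       # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

def verify(secret: bytes, submitted: str, window: int = 1) -> bool:
    """Accept codes from adjacent time steps to tolerate clock drift."""
    now = time.time()
    return any(hmac.compare_digest(totp(secret, now + i * 30), submitted)
               for i in range(-window, window + 1))

secret = b"12345678901234567890"     # RFC test secret; never hard-code in production
print(verify(secret, totp(secret)))  # True: prover and verifier share the secret
```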

That’s pretty granular. And until the world collaborates well enough to make zero trust possible, we have the tools to make its objectives plausible.

Risk Assessment to “Right-Size” Your Zero Trust Approach

A cyber security risk assessment evaluates an organization’s ability to protect its information and information systems from cyber threats. Its purpose is to identify, assess, and prioritize risks to your assets, data, and systems.

Cyber security risk assessments are not only a great idea; they are also required by a growing number of laws, regulations and standards, including the HIPAA Security Rule, PCI DSS, Massachusetts 201 CMR 17.00, SOX Audit Standard 5 and FISMA.

With a risk assessment, your organization can “right-size” your zero trust approach – implementing the combination of mechanisms and approaches to maximize protection of your most sensitive assets and data, now and in the future.

HALOCK’s cyber security risk assessment method is based upon the Duty of Care Risk Analysis Standard (DoCRA). This method helps organizations determine whether their safeguards appropriately protect others from harm while presenting a reasonable burden to themselves, and it helps establish whether an organization has practiced “due care” in implementing its risk strategy.

Conclusion

The concept of a zero trust security approach was driven by factors such as insider threats, increased use of mobile devices and increased use of cloud-based applications. The risks organizations face have changed, and they will continue to change as business needs evolve. However, there is no Staples “easy button” when it comes to zero trust security.

Consider conducting a risk assessment that addresses the requirements specified by laws, regulations and standards, and that practices “due care” in “right-sizing” your zero trust approach. Zero trust security is an important concept for protecting your assets and data, but only if you take a practical approach to it!

SCHEDULE YOUR FULL HALOCK SECURITY BRIEFING