What is the point of Zero Trust in access management?

Article

Jun 9, 2023


Zero trust emerged in the mid-2010s in response to the continued failure of static defenses to keep attackers out. Static defenses—classic, medieval castle-style defense in depth—assumed a perfectly built security architecture would successfully lock bad actors out of a network. Only authenticated users with the correct permissions could pass the gates and guards—firewalls, intrusion detection, signature-based defenses against malware, and extensive and expensive logging. But real-world data showed the static defense model was failing. Successful cyberattacks were increasing, not decreasing. What was going wrong?

The sanctity of a user’s identity lives at the center of a successful cybersecurity plan. While tactics may vary by sector and purpose, fundamentally we want the right user to get to the right data, and only that user and only that data. Identity demands authentication, and authorization permits access.

Verifying one’s identity through authentication (usually a password) became the primary target for offense and defense. We authenticate someone’s identity through three classically accepted methods:

  1. What you know,

  2. What you have, and

  3. What you are.

Really, what you are should be the first and final answer, but biometric infrastructure isn’t sufficiently widespread, and the privacy concerns are real. What you know is usually a password. What you have is a physical token or a device generating a time-based code, typically used to counter someone stealing your password. What you are would be a biometric, such as a fingerprint, facial recognition, or a retinal scan.
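
To make the “what you have” factor concrete, here is a minimal sketch of how a time-based one-time password (TOTP, RFC 6238) is derived from a secret shared between the server and the user’s device. The secret below is a well-known documentation example, and the parameters (SHA-1, six digits, 30-second step) are simply the common defaults, not any particular vendor’s implementation.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, digits: int = 6, step: int = 30) -> str:
    """Derive an RFC 6238 time-based one-time password from a shared secret."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time() // step)              # both sides agree on the time window
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# The server and the user's device both hold the secret; matching codes prove
# possession of the device without the secret ever crossing the wire.
print(totp("JBSWY3DPEHPK3PXP"))  # illustrative secret commonly used in documentation
```

Because the code rolls over every 30 seconds, a stolen password alone no longer gets an attacker in; they would also need the device holding the secret.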

When a user authenticates their identity to a system for which they have authorization, the classic model was to trust that device (also known as an endpoint, a term we will use interchangeably with device going forward; endpoint is a weird bit of tech-sector jargon used by no layperson I’ve ever met, but as it’s common parlance we’ll use it here too).

Before zero trust, an endpoint was imbued with the authorization attributes of the human’s identity. In other words, once a human user authenticated their identity with the authorized system, the endpoint got the same access privileges the human was accorded. It became a “trusted endpoint.” If there was malware on that endpoint, or the device was stolen, or someone simply read over your shoulder: oops. The trusted endpoint became an unauthorized access path to data and systems.

Because humans aren’t in the machine, zero trust posits that we must become skeptics of the trusted endpoint. We cannot simply accept that an identified and authenticated user is the sole entity involved in a session. We must instead assume the session itself is untrustworthy. We have zero trust.

Zero Trust in Practice

In practice, zero trust accepts user identity verification and authorization, and then continuously challenges the veracity of the trusted session. Does the identity start performing unusual actions, such as accessing file systems it doesn’t normally touch or downloading unusual amounts of data? Is activity occurring outside normal hours? Is the geolocation appropriate? At the device level, is the typing pattern what we expect, and when the user walks, do gait and stride match the human behind the identity we’re granting access to?
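
As a loose sketch of what that continuous verification could look like in code, the toy below scores a session against a per-user baseline. The signals, baseline values, and threshold are invented for illustration, not any particular product’s API; a real deployment would learn baselines from history and act through its access-management layer.

```python
from dataclasses import dataclass

@dataclass
class SessionSignal:
    user: str
    hour: int               # local hour when the activity occurred
    country: str
    bytes_downloaded: int
    paths_touched: set

# Per-user baselines would normally be learned from history;
# these values are invented for illustration.
BASELINES = {
    "alice": {
        "hours": range(8, 19),
        "country": "US",
        "daily_bytes": 200_000_000,
        "paths": {"/finance", "/shared"},
    },
}

def risk_score(sig: SessionSignal) -> int:
    """Count how many continuous-verification checks the session fails."""
    base = BASELINES[sig.user]
    checks = [
        sig.hour in base["hours"],                    # usual working hours?
        sig.country == base["country"],               # expected geolocation?
        sig.bytes_downloaded <= base["daily_bytes"],  # normal download volume?
        sig.paths_touched <= base["paths"],           # familiar file systems only?
    ]
    return checks.count(False)

sig = SessionSignal("alice", hour=3, country="RO",
                    bytes_downloaded=5_000_000_000,
                    paths_touched={"/finance", "/hr"})
if risk_score(sig) >= 2:
    print("anomalous session: step up authentication or terminate")
```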

Disposable Components

These examples are mostly illustrative and readily understandable for identity. Similar ones exist for the endpoints themselves: if we decide all endpoints are untrustworthy, then their access window must be minimized. We can compartmentalize the endpoint a user is on from its target system by placing their session into a segmented, isolated environment. Virtualization allows us to destroy session-isolation environments and rebuild them from validated images over and over again.

This method of disposable intermediate components means we do not need to trust that a user didn’t track mud, malware, or malicious actors into our clean environment when we granted them a session; at the end of their session we instead destroy the infrastructure they used. By destroying the intermediate endpoints, we delete malware and eject unauthorized users from our systems. Imagine the same practice in another field: medicine. Throwing away a medical glove is far easier than trying to scrub pathogens off our hands after treating a patient. We’ll still use soap and water (firewalls, intrusion detection and prevention, heuristic and signature-based defenses, etc.), but it’s far easier and less costly to prevent an infection than to cure one when all it takes is a simple protective barrier.
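
Here is one minimal way such a disposable intermediary could be managed, sketched with the Docker CLI purely as an example; any virtualization layer works the same way, and the image name is a placeholder for your own validated, signed image.

```python
import subprocess
import uuid

VALIDATED_IMAGE = "registry.example.com/session-gateway:signed"  # placeholder image

def open_session() -> str:
    """Stand up a fresh, isolated session environment from a known-good image."""
    name = f"session-{uuid.uuid4().hex[:8]}"
    subprocess.run(
        ["docker", "run", "--rm", "-d", "--name", name, VALIDATED_IMAGE],
        check=True,
    )
    return name

def close_session(name: str) -> None:
    """Destroy the environment; --rm deletes the container and anything in it."""
    subprocess.run(["docker", "stop", name], check=True)

# Every session gets a brand-new intermediary built from the validated image;
# nothing a user tracked in can persist to the next session.
session = open_session()
try:
    ...  # proxy the user's traffic through the disposable container
finally:
    close_session(session)
```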

Moving Target Defense

Let’s go back to that holy grail: a user’s identity. We talked about the original data we had to protect: their password. It’s what they know. But a password is not the only key piece of information an authorized user knows. They also know the location where they authenticate. This takes a moment to process, because we treat that information as commonplace. In our everyday lives we know the URLs of the websites we visit and log into.

But take a step back and realize that if someone picks up a random password off the street, they don’t know where to use it. It’s not great that the password is out there—they could write a script that tries to log into every website in existence—but that brute-force guessing method is extremely time-expensive. Knowing where to log in means knowing where to attack. Location data falls within the very first step of a successful attack, reconnaissance, on the dramatically named Cyber Kill Chain.

We deny attackers the information needed to target an attack using a technology called moving target defense. When a session ends or an attack commences, we move the location of the entrance to the target systems. The attacker must find the entrance all over again before they can restart the process. In the physical world, this method of concealment and maneuver is commonplace: militaries use camouflage and highly mobile vehicles to evade detection and destruction. Nuclear submarines are prized far above missile silos. If, as a way of hiding in a conflict, you were asked to wear bright neon with a flashing light atop your head in the middle of an open field, and further told to broadcast your location—trusting your body armor to protect you—you would look at the requester askance. They’re surely mad. Yet we do precisely that all the time with static defenses and networks when we don’t safeguard the location of our critical systems.
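
A toy sketch of that rotation logic appears below. The provisioning, teardown, and publishing functions are placeholders for whatever your cloud provider’s API and out-of-band client channel actually offer; the point is the sequence: open the new door, tell your own people, then brick up the old one.

```python
import random
import secrets

REGIONS = ["us-east-1", "eu-west-2", "ap-southeast-1"]  # illustrative cloud regions

def provision_entry_node(region: str) -> str:
    # Placeholder: in practice, call your cloud provider's API to stand up
    # a fresh ingress host; here we just fabricate an address.
    return f"{secrets.token_hex(4)}.{region}.example.net"

def destroy_entry_node(addr: str) -> None:
    # Placeholder teardown: deallocate the old ingress host.
    print(f"destroyed {addr}")

def publish_to_authorized_clients(addr: str) -> None:
    # Placeholder: push the new address to clients over an authenticated channel.
    print(f"published {addr}")

def rotate_entry(old_addr=None) -> str:
    """Move the entrance: open the new door, notify clients, remove the old door."""
    new_addr = provision_entry_node(random.choice(REGIONS))
    publish_to_authorized_clients(new_addr)
    if old_addr is not None:
        destroy_entry_node(old_addr)
    return new_addr

# Rotate on session end or on detected attack: an attacker's reconnaissance
# goes stale the moment the entrance moves.
addr = rotate_entry()
addr = rotate_entry(addr)
```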

Zero trust argues that any information useful for accessing a system should be denied or destroyed at the earliest possible moment. By altering the entrance network topology through disposable components spread across and hidden within public cloud providers, we achieve the zero trust objective.

