Tl;dr: It’s really not. Kerberos is showing its age, but it has served us well over the years. As we build new protocols we should remember all the things we got right with it, and account for all the things we got wrong.

A Bit of Background on Kerberos

Kerberos is an authentication protocol. It allows a party (A) to prove to another party (B) they are who they say they are by having a third party (C) vouch for them.

In technical terms this means a principal can prove to another principal who they are by authenticating themselves to an authentication server.

In practical terms a user can log in to their application by authenticating to Active Directory using a password.

This protocol is built on a set of message exchanges.

The following is mostly accurate, but a few liberties have been taken for the sake of simplification. All users are principals, but not all principals are users. All principals can authenticate to all principals, but the type of principal dictates properties of messages.

Here we’re looking at a standard user principal authenticating to a service principal for the simplest explanation as it’s the most common.

Authenticating to the Authentication Server

The first message exchange is between the (user) principal and the authentication server.

The user creates a message, encrypts it using their password, and sends it to the authentication server. The server has knowledge of the user’s password and can decrypt the message, proving to each party they are who they say they are respectively (within the limit of how easy it is to guess or steal this password). The server then generates a new message containing a ticket and a session key, encrypted against the user password, returning it to the user.

Authentication Service Exchange

The user can decrypt the message and now has a ticket and session key. The session key can be used to secure any future communications between the two parties, and the ticket is used as evidence of authentication for future requests requiring authentication. This ticket is called the ticket granting ticket (TGT) — you use it to get other tickets to services.

The TGT contains identifying information about the user as well as a copy of the negotiated session key, and is encrypted to the password of a special account named krbtgt. Only the authentication server knows the password for krbtgt.
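The shape of this first exchange can be sketched in a few lines of toy Python. Everything here is illustrative: the names and passwords are made up, and an HMAC tag stands in for real Kerberos encryption (it proves knowledge of a key, but unlike the real thing it provides no confidentiality).

```python
import hashlib, hmac, json, os

def string_to_key(password):
    # toy stand-in for the Kerberos string-to-key functions
    return hashlib.sha256(password.encode()).digest()

def seal(key, payload):
    # toy stand-in for Kerberos encryption: an HMAC tag over the payload
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).digest() + body

def unseal(key, blob):
    tag, body = blob[:32], blob[32:]
    if not hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest()):
        raise ValueError("wrong key")
    return json.loads(body)

user_key = string_to_key("hunter2")          # known to the user and the AS
krbtgt_key = string_to_key("krbtgt-secret")  # known only to the KDC

# user -> AS: a message sealed with the password-derived key
as_req = seal(user_key, {"user": "alice", "timestamp": 1700000000})

# AS: opening the message proves the user knows the password
request = unseal(user_key, as_req)

# AS -> user: a session key plus a TGT the user cannot open
session_key = os.urandom(32).hex()
tgt = seal(krbtgt_key, {"user": request["user"], "session_key": session_key})
as_rep = seal(user_key, {"session_key": session_key, "tgt": tgt.hex()})

# user: recovers the session key; the TGT stays opaque
reply = unseal(user_key, as_rep)
assert reply["session_key"] == session_key
```

The one structural trick to notice is the nesting: the TGT is sealed with a key the user doesn't have, then wrapped in a message sealed with a key the user does have.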

Authenticating to the Service

The second message exchange is between the (user) principal and (service) principal. It’s a bit more complicated than the first exchange.

A protocol higher in the stack indicates to the user that the service requires authentication. Kerberos doesn’t much care; it’s up to the user to initiate the exchange. The user determines the service principal name (SPN) of this service and generates a new message to the ticket granting server asking for a service ticket.

Service Ticket Exchange

This message from the user to the ticket granting server contains the ticket-granting-ticket (TGT) from the first exchange. The message is encrypted to the previously negotiated session key. The ticket granting server also has knowledge of the krbtgt password and so can decrypt the TGT and extract the session key.

The ticket granting server can then validate the message and generate a response containing a service ticket for the service principal, carrying identifying information from the TGT. The service ticket is encrypted against the password of the service principal. Only the ticket granting server and service principal know this password. The encrypted ticket is wrapped in a message that is then encrypted using the previously negotiated session key and returned to the user.

The user is able to decrypt the message using the session key and can forward the service ticket to the service. The user can’t see the contents of the ticket (and therefore can’t manipulate it). All the user can do is forward it. The service receives the encrypted service ticket and decrypts it using its own password. The service now knows the identity of the user. The service can optionally respond with a final message encrypted with the subsession key to prove receipt, or require a separate subsession key.
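The second exchange can also be sketched as toy Python. As before, the names and secrets are hypothetical and an HMAC tag stands in for Kerberos encryption; the point is the key plumbing, not the cryptography.

```python
import hashlib, hmac, json, os

def string_to_key(password):
    return hashlib.sha256(password.encode()).digest()  # toy string-to-key

def seal(key, payload):
    # toy stand-in for Kerberos encryption (integrity only, no confidentiality)
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(key, body, hashlib.sha256).digest() + body

def unseal(key, blob):
    tag, body = blob[:32], blob[32:]
    if not hmac.compare_digest(tag, hmac.new(key, body, hashlib.sha256).digest()):
        raise ValueError("wrong key")
    return json.loads(body)

krbtgt_key = string_to_key("krbtgt-secret")    # known only to the KDC
service_key = string_to_key("service-secret")  # known to the KDC and the service

# state left over from the first exchange
session_key = os.urandom(32).hex()
tgt = seal(krbtgt_key, {"user": "alice", "session_key": session_key})

# TGS: opens the TGT with the krbtgt key, extracts the session key,
# and issues a service ticket the user cannot open
contents = unseal(krbtgt_key, tgt)
sk = bytes.fromhex(contents["session_key"])
service_ticket = seal(service_key, {"user": contents["user"]})
tgs_rep = seal(sk, {"ticket": service_ticket.hex()})

# user: unwraps the reply and forwards the (opaque) ticket to the service
ticket = bytes.fromhex(unseal(sk, tgs_rep)["ticket"])

# service: opens the ticket with its own key and learns who the user is
assert unseal(service_key, ticket)["user"] == "alice"
```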

Protected Application Layer Exchanges

Now that the service has verified the user and the user has verified the service it’s possible for them to use a key derived from the subsession key to safely continue communications between the user and the service.

The Pros and Cons of Kerberos

It shouldn’t come as a surprise that Kerberos has many positive and negative traits.

The Pros

Kerberos is Cryptographically Sound

This turns out to be a useful property of an authentication protocol. You can generally prove the cryptographic promises of each leg are secure without making too many assumptions.

A principal’s identity is proved by the secret properties of the password and hardness of the encrypting algorithm; the identity of the server is proven conversely by knowing the password and being able to decrypt the message without having to reveal the password to any party. This is an often overlooked property of most authentication protocols.

The multileg exchange is provably secure by guaranteeing only the final recipients can decrypt messages; messages can’t be modified because they’re signed by the various authoritative sources; and messages can’t be replayed because they have counters.

All of these properties add up to a confidential, tamper-evident, authenticated, non-repudiable authentication protocol. Most other protocols don’t have all these properties and rely on external protocols to provide these guarantees.

Cryptographic Properties Extend to Services

Services that rely on the derived session keys apply the properties of the authenticated exchanges to the derived key. This is non-trivial to get right in the best of cases so having it available to you and bound to your authentication service is useful.

It’s Over-Specified

The Kerberos V5 RFC is mostly complete. It provides an end-to-end solution for multi-party authentication with solutions for transport, retry semantics, supported credential types, etc. This is useful because it limits one’s ability to break interoperability while still being true to the specification. A specification is a great equalizer. You either built it correctly or you didn’t.

Implementors will invariably break interop one way or another, but the spec will show where the break lies. There is utility in being able to go to the implementor, point to the gap, and ask them to fix it.

There are of course areas where there are gaps; no spec is perfect. However, there are at least 12 recognized specs intended to close these gaps and introduce improvements.

It has Provisions for Authenticating Both Parties

Most multi-party federation protocols only dictate how you authenticate the final leg of the trip, meaning from the identity provider (authentication service) to the relying party (service). This means you’re left on your own to figure out how to authenticate to the identity provider. This is not a bad thing, but it starts placing assumptions on the properties of the authenticated identity. This leads to questionable interop.

Kerberos is pretty clear about how a user authenticates to the authentication service to get tickets. The implementor doesn’t need to design their own system.

It’s Ubiquitous

Windows Integrated Authentication relies heavily on Kerberos; this means any application running on Windows can authenticate users or services using Kerberos. Most other major operating systems have an ability to do Kerberos, either natively or through third-party implementations like Heimdal or MIT.

Applications can opt in to Kerberos authentication using the SSPI or GSS APIs with relatively little or no effort. High-level development frameworks often wrap these APIs further to simplify usage.

Most organizations have Active Directory, which is amongst other things an implementation of the Kerberos AS and TGS.

This generally means you won’t be seriously limiting yourself if all you relied on for authentication was Kerberos.

Kerberos Credential Protection is Well Understood

There is a fairly large gap in the second exchange, where if an attacker can steal the TGT, they can take that TGT and use it on other clients. This means they can impersonate the user as long as the TGT is valid, and all the services involved really won’t know any better.

This is an unfortunate side effect most authentication systems have because you’re exchanging an expensive authentication process (user entering password) with an inexpensive process (caching a TGT locally) to optimize for performance and user annoyance.

There are ways to solve this problem, generally described as proof-of-possession, where the ticket is bound to something only the legitimate holder has. The entity receiving the ticket can request that you prove ownership by doing work that only you can do and that is incredibly difficult for an attacker to forge. The simplest form is signing a challenge with a secret only the user knows, such as with a private key stored on a smart card or FIDO2 device.
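A minimal sketch of that challenge-response idea, with an HMAC over a device-held secret standing in for the smart-card signature (real proof-of-possession would use an asymmetric key so the verifier only holds the public half; everything here is illustrative):

```python
import hashlib, hmac, os

device_secret = os.urandom(32)   # never leaves the user's device
registered = device_secret       # verifier's copy; with real PoP this would be
                                 # a public key rather than a shared secret

challenge = os.urandom(16)       # verifier -> user: fresh random challenge
proof = hmac.new(device_secret, challenge, hashlib.sha256).digest()

# verifier checks the proof against the registered credential
assert hmac.compare_digest(
    proof, hmac.new(registered, challenge, hashlib.sha256).digest())

# a captured proof is useless against a fresh challenge,
# which is what makes a stolen ticket alone insufficient
fresh_challenge = os.urandom(16)
assert proof != hmac.new(registered, fresh_challenge, hashlib.sha256).digest()
```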

But down the rabbit hole we go because now the verifier needs to know the public key ahead of time, and… We’ve increased complexity of the system making it more fragile and difficult to use correctly.

It’s like entering a carnival. You can either pay each ride independently, which is annoying, time consuming, and potentially dangerous for all parties, or you can go to the ticket booth, buy a bunch of tickets, and use those tickets as evidence you paid money. It’s easier and safer, and it’s not the end of the world if an attacker steals your tickets because they’re only useful for a short period of time.

Given that this is a well known issue, we’ve done a lot of work making sure you can’t steal these tickets. In Windows we have things like centralized protection of secrets in the LSA, which moves any dangerous secrets out of a given application (making it hard to steal secrets), as well as Credential Guard, which uses virtualization-based security to move all the important secrets to a separate virtual machine so compromising LSA won’t get you anything (making it extremely difficult to steal secrets).

Conversely other authentication protocols are less well understood, primarily because they haven’t been around as long and aren’t as widely adopted.

Cross-Realm Trusts Let you Create Security Boundaries

Kerberos has a concept of cross-realm authentication. This allows two organizations to trust the identities issued by the other organization. Trusts can be unidirectional or bidirectional, meaning organization A might trust identities from organization B while B doesn’t trust A.

This has untold utility, because you can isolate resources based on the amount of security required by the resource. There aren’t many authentication protocols that have a built-in capability for this without presenting raw credentials.

The Cons

It’s easy to find fault in things so this is primarily about the big issues people have complained about in the past. We’re not talking minor nits.

The Complexity might Kill You

There’s an old joke: I went to explain Kerberos to someone and we both walked away not understanding it.

My reaction trying to understand the protocol the first, second, third, and fourth time. Fifth time’s the charm, right?

Exhibit A: See the first section.

Joking aside, this is a complicated protocol to understand completely. It doesn’t matter to end users so much, but it seriously degrades one’s ability to troubleshoot failures, verify security guarantees, or extend it in any meaningful way.

Cross-Realm Trusts are Complicated

Cross-realm trusts can be transitive, meaning if A trusts B and B trusts C, A could trust C. In principle this is simple and has a logic to it, but in practice it’s difficult to fit in your head and reason about.
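The transitivity rule itself is easy to state in code, even if real deployments are hard to reason about. A small sketch with hypothetical realm names, where "A trusts B" means A accepts identities issued by B:

```python
# directed trust edges: key trusts every realm in its set
trusts = {"A": {"B"}, "B": {"C"}, "C": set()}

def trusts_transitively(realm, issuer):
    # walk the trust graph from `realm` looking for `issuer`
    seen, frontier = set(), {realm}
    while frontier:
        r = frontier.pop()
        if issuer in trusts.get(r, set()):
            return True
        seen.add(r)
        frontier |= trusts.get(r, set()) - seen
    return False

assert trusts_transitively("A", "C")      # A -> B -> C
assert not trusts_transitively("C", "A")  # trust is directional
```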

It’s also difficult to reason about authorization to resources. Presenting an identity from another realm and trusting it hasn’t been tampered with is one thing, but deciding that the claims presented about the user should be considered during authorization decisions is something completely different. You are often forced to explicitly set rules within the resource realm instead of implicitly trusting the information because the authorization rules are contextually relevant.

(Unconstrained) Delegated Authentication needs to Die

Once a service has a user’s identity, it’s useful to forward that identity to another service as proof that the first service is operating on behalf of the user. This means you don’t end up with applications holding the highest possible permissions any user might need, and instead just rely on the user having the necessary permissions. Compromising downstream resources via the application requires having an active user versus having unfettered access.

Kerberos got delegation wrong though. It lets the application operate as the user and access further resources on behalf of that user, but it’s not constrained to specific services. An application might only need access to a single database server, but when given delegation rights, it can access any resource as that user.

However, we have a solution to this which is constrained delegation and resource-based constrained delegation, which specifically lists which principals can be delegated to, as well as what services can receive those delegated tickets.

The problem is that most people don’t understand how constrained delegation works which tends to lead to falling back to unconstrained delegation.

Cryptographic Primitives are Starting to Smell

DES was the standard when Kerberos was first published. It’s still supported by all the major implementations in one form or another and most enterprises still have it enabled.

RC4 is still kicking and is often the default especially when trying to work across multiple vendors.

Key derivation at its lowest form is still generally based on MD4.

Integrity is based on a truncated SHA1 HMAC in most implementations. SHA256 has recently been standardized, but see the next few issues.
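That truncated HMAC is roughly the following (the key and message here are placeholders; real Kerberos also derives a usage-specific key before computing the checksum):

```python
import hashlib, hmac

def kerberos_checksum(key, data):
    # HMAC-SHA1 truncated to 96 bits (12 bytes), as used by the
    # AES encryption types in most implementations
    return hmac.new(key, data, hashlib.sha1).digest()[:12]

tag = kerberos_checksum(b"\x01" * 32, b"some ticket bytes")
assert len(tag) == 12
```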

Ossification makes Changing Cryptographic Primitives Impossible

We learned this with TLS 1.3: protocols don’t like change. Things start to harden and become brittle when implementations make assumptions about protocol designs. Every implementation makes assumptions in one way or another and often they become difficult to fix.

Fundamental overhauls break everything even if the original protocols were designed to support future changes.

Changes need Critical Mass before they can be Turned on by Default

Turning on a new cryptographic primitive like SHA256 probably won’t break much, but it can’t be default until everyone supports it. It will take a decade before most implementations support it, and who knows how long it’ll be before it can be turned on by default, because there’s always some box that won’t support it.

ASN.1 Data Formatting Suuuuuuuuuuucks

This is a personal gripe. I dislike ASN.1 because it’s a complex serialization format. It’s difficult to parse and more difficult to generate.

This is a problem because it limits one’s abilities to build Kerberos implementations and results in only a handful of libraries that are feature rich. This has the unfortunate side effect that fewer people understand the protocol (coupled with complexity) and fewer still contribute to its long term development.

Kerberos Message Structures have a Range of Serialization Formats

Another gripe of mine. There are multiple serialization formats in use depending on the type of data structure you’re looking at. Kerberos itself uses ASN.1, but MS-KILE (Microsoft’s specification of deltas) uses RPC (NDR) serialization for authorization structures. This is mostly just annoying having to flip between formats, but complicates interop.

All the Crypto is Symmetric

Kerberos is a protocol that was built back when asymmetric cryptography was too expensive to use routinely or securely. It relies entirely on shared secrets between all parties, which means anyone that’s verifying a message can spoof a message. This can be negated by asymmetric signing.
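A tiny demonstration of why "anyone that’s verifying a message can spoof a message" under shared secrets. HMAC here stands in for any symmetric scheme, and the messages are made up:

```python
import hashlib, hmac

shared = b"secret known to issuer AND verifier"

def mac(message):
    return hmac.new(shared, message, hashlib.sha256).digest()

issued = (b"alice; role=user", mac(b"alice; role=user"))
assert hmac.compare_digest(issued[1], mac(issued[0]))  # verifier can check it...

# ...but the verifier holds the same key, so it can mint an equally
# "valid" message itself; a symmetric MAC can't prove *who* created it,
# which is exactly what asymmetric signing fixes
forged = (b"alice; role=admin", mac(b"alice; role=admin"))
assert hmac.compare_digest(forged[1], mac(forged[0]))
```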

PKINIT is an extension to Kerberos that allows users to execute the first message exchange using asymmetric cryptography, but all future exchanges still rely on symmetric secrets.

We actually built a variation of Kerberos that uses asymmetric crypto called PKU2U, but it’s designed for user-to-user authentication — i.e. without a trusted third party. It was originally built for HomeGroup and is now used for point-to-point authentication using Active Directory credentials.

Initial Authentication Generally Assumes Passwords

The last point suggested PKINIT can be used for authentication, and it can, but this assumes the application knows how it works and supports certificates. It turns out this isn’t as common as you’d think, and applications tend to just prompt for passwords. There will always be legacy applications that just work that will never be updated.

The other problem here is that you’re still limited to passwords or RSA-based asymmetric keys. We’re trending towards ECC asymmetric crypto with FIDO, and it’s impossible to fit that into an exchange that is universally (or even widely) supported, let alone other arbitrary authentication messages.

It should be pointed out that using passwords does have its benefits. They are incredibly simple to use when machine generated and managed.

Line of Sight to Authentication Servers is Mandatory

Line of sight to the authenticating server is often a requirement in authentication protocols, but Kerberos services tend to live only within private networks. Whether you should move them to the open internet is up for debate (actually it’s not — don’t do it), but most organizations choose not to do it and would rather rely on a service born on the internet if needed.

So What’s the Takeaway?

There are certainly a lot of problems with Kerberos — they are the criticisms from heavy adoption. The protocol still has value, but it’s starting to show its age and we need to look forward to what replaces it. As we move away from it we need to remember that Kerberos got quite a few things right and learn from that.

I started the Kerberos.NET project with a simple intention: be able to securely parse Kerberos tickets for user authentication without requiring an Active Directory infrastructure. This has been relatively successful so far, but one major milestone I hadn’t hit yet was making sure it worked with .NET Core. It now does.

Porting a Project

There is no automated way to port a project to .NET Core. This is because it’s a fundamentally different way of doing things in .NET and things are bound to break (I’m sure that’s not actually the reason). There is documentation available, but it’s somewhat high-level and not necessarily useful when you’re in the weeds tracking down weird dependency issues. Given that, you’re kinda stuck just copying over the code and doing the build-break-tweak-build-wtf dance until you have something working. This was my process.

  1. Create a new .NET Standard project — Standard dictates the API level the library is compatible with; Core dictates the runtime compatibility. I didn’t have any particular targeting requirements, so I made it as broad as possible. Now any runtime supporting .NET Standard can use this library.
  2. Copy all the code from the main project into the new project — I probably could have created the .NET Standard project in the same location, but it’s often easier to start with a blank slate and move things in.
  3.  Build!
  4. Build fails — MemoryCache/ObjectCache don’t exist in the current API set. Thankfully this is isolated to just the Ticket Replay detection, so I was able to temporarily convert it to a simple dictionary. I eventually found a library that replaces the original caching library.
  5. Build fails again — SecurityIdentifier doesn’t exist in the current API set either. Doh! I wasn’t going to hold my breath waiting for this to be moved over, so I created my own class that had the same usefulness. This also gave me the opportunity to fix some ugly code required to merge RIDs into SIDs, which added a nice little performance boost.
  6. Build succeeds!
  7. Unload/remove the original .NET 4.6 projects from the solution.
  8. Adjust test project dependencies to use the new project instead.
  9. Run!
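The SID work in step 5 boils down to a simple rule: a SID is a dash-separated list of sub-authorities, and merging a RID in just appends one more. A sketch (the example SID is made up):

```python
def append_rid(domain_sid, rid):
    # merging a RID into a domain SID appends one more sub-authority
    return f"{domain_sid}-{rid}"

assert append_rid("S-1-5-21-1004336348-1177238915-682003330", 512) == \
    "S-1-5-21-1004336348-1177238915-682003330-512"
```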

Once I was able to get the test projects up and running I could run them through the test cases and verify everything worked correctly. Everything worked, except for the AES support. 😒

Porting a Project with a Dependency

I added support for AES tickets a while back and it was built in such a way that it lived in a separate package because it had an external dependency on BouncyCastle. I’m not a fan of core packages taking dependencies on third parties, so I built it as an add-on that you could wire-in later. This worked relatively well, until I needed to migrate to .NET Core.

It turns out there are a number of Core/Standard/PCL packages for BouncyCastle, but what’s the right one? Weeeeelll, none of them, of course!

At this point I decided to suck it up and figure out how to make SHA1 do what I want. One option was to muck with the internals of the SHA1Managed class with reflection, but that turned out to be a bad idea because the original developers went out of their way to make it difficult to get access to intermediate values (there are philosophical arguments here. I don’t fault them for it, but it’s really frustrating). I considered rewriting the class based on the reference source, but that too was problematic for the same basic reason. Eventually I ended up porting the BouncyCastle implementation because it was relatively self-contained, and already worked the way I needed.

Security note: You should never trust crypto code written by some random person you found on the internet. That said, there’s a higher chance of finding a vulnerability in other parts of the code than with the port of this algorithm, so…

This actually works out well because now all the code can go back into a single package without any dependencies whatsoever. Neat!

Porting a NuGet Package

The NuGet pieces didn’t really change much, but now the manifest is defined in the project file itself, and packages are built automatically.

Simpler Package Management

Now the package is just an artifact of the build, which will be useful if/when I ever move this to an automated build process.

Active Directory has had the ability to issue claims for users and devices since Server 2012. Claims allow you to add additional values to a user’s Kerberos ticket and then make access decisions based on those values at the client level. This is pretty cool because you normally can only make access decisions based on group membership, which is fairly static in nature. Claims can change based on any number of factors, but originate as attributes on the user or computer object in Active Directory. Not so coincidentally, this is exactly how claims on the web work via a federation service like ADFS.

Of course, claims aren’t enabled by default on Windows for compatibility reasons. You can enable them through Group Policy:

Computer Configuration > Policies > Administrative Templates > System > KDC > KDC support for claims, compound authentication and Kerberos armoring

Allow Claims in Group Policy

You can configure claims through the Active Directory Administrative Center Dynamic Access Control.

Active Directory Administrative Center Dynamic Access Control

You can see the Claim Types option on the left hand menu. From there you can add a claim by hitting the New > Claim Type menu.

The configuration is pretty simple. Find the attribute of the user or device you want to issue, and select whether it should be issued for users or computers or both. You can configure more advanced settings below the fold, but you only need to select the attribute for this to work.

Add New Claim Type

Once the claim type is configured you can modify attributes of a user through the Attribute Editor in either ADAC or the AD Users and Computers console.

User Attributes in ADAC

That’s all it takes to start using claims. You do have to sign out and back in before these claims take effect though, since Active Directory issues claims in the Kerberos tickets, and tickets are only issued during sign-in (or the myriad other times they’re issued, but signing out and back in is the most reliable).

However, once you’ve signed out and back in you can pop open PowerShell and see the claims in your token:

[System.Security.Principal.WindowsIdentity]::GetCurrent().Claims | fl Type, Value

Type : ad://ext/department:88d4d68c39060f49
Value : Super secret division

Windows 8 and higher automatically extract the claims from the PAC in the ticket and make them available in the user token. Additionally, .NET understands claims natively and can extract them from the Windows token.

And of course now Kerberos.NET! The library will automatically parse any user or device claims and stick them in to the resultant claims produced during authentication:

Kerberos Claims

No configuration necessary. The library will do all the work. Enjoy!

It’s been a few months since there’s been any public activity on the project but I’ve quietly been working on cleaning it up and there’s even been a PR from the community (thanks ZhongZhaofeng!).

Part of that clean up process has been adding support for AES 128/256 tokens. At first glance you might think it’s fairly trivial to do — just run the encrypted data through an AES transform and you’re good to go — but let me tell you: it’s not that simple.

On Securing Shared Secrets

There’s primarily one big difference between how RC4 and AES are used in Kerberos: AES salts the secret (DES actually does this too, but DES is dead), whereas RC4 just uses an MD4 hash of the secret. That means you need to think about how you store secrets when using the two algorithms. With RC4 you can store either the cleartext or the hash and still use it anywhere. With AES you need the cleartext secret any time you reconfigure things, because the salt (more on this later) may well change. That means the secret isn’t portable — this is a good thing. The problem though is that the salt uses information that isn’t available at runtime, meaning there’s an extra step involved when setting up the service. This is frustrating.

On Calculating Salts

A salt is just a value that makes a secret more unique. Generally they’re just random bytes or strings you tack on to the end of a secret, but in the Kerberos case it’s a special computed value based on known attributes. This actually serves a purpose because the salt can be derived from a token without needing extra information; the token would need to carry extra information if the salt was random, which might break compatibility or just increase the token size. The Kerberos spec is a bit vague about how this is supposed to work, but it’s basically just concatenating the realm with the principal names of the service:

salt = [realm] + [principalName0]...[principalNameN]
salt = "FOO.COM" + "server01" = "FOO.COMserver01"

This actually works well because it’s simple to compute and solves the portability problem. Both the ticket issuer and the ticket receiver know enough to encrypt and decrypt the token.
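The concatenation rule above is trivial to sketch (realm and principal name are hypothetical):

```python
def mit_salt(realm, principal_components):
    # realm concatenated with each component of the principal name
    return realm + "".join(principal_components)

assert mit_salt("FOO.COM", ["server01"]) == "FOO.COMserver01"
```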

Enter Active Directory.

For reasons not entirely clear, Active Directory decided to do it a little differently. At least it’s documented. The difference is not trivial and introduces a piece of information that’s not in the request, which means you need prior knowledge for this to work — the new computation requires the samAccountName of the service account without any trailing dollar signs.

salt = [realm.ToUpper()] + "host" + [samAccountNameWithoutDollarSign] + "." + [realm.ToLower()]
salt = "FOO.COM" + "host" + "server01" + "." + "foo.com" = "FOO.COMhostserver01.foo.com"

Yeah, I don’t know either.
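The Active Directory computer-account variant, as described by the formula above, sketched with a made-up realm and account name:

```python
def ad_computer_salt(realm, sam_account_name):
    # REALM (upper) + "host" + sAMAccountName minus trailing $ + "." + realm (lower)
    return realm.upper() + "host" + sam_account_name.rstrip("$") + "." + realm.lower()

assert ad_computer_salt("FOO.COM", "server01$") == "FOO.COMhostserver01.foo.com"
```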

So now you have a couple options. You can either pre-compute the secret/salt combo one time and just store that somewhere, or store the two values and use them at runtime. Pre-computing the key isn’t a bad idea — it’s actually how most environments work, generating a keytab file. This means the secret doesn’t need to be known by the service, but it adds a deployment step.

I recommend just storing the computed value.

On Computing the Key

I mentioned previously that pre-computing the key means the secret isn’t known by the service. This is because the key itself is cryptographically derived from the secret and salt in an overly convoluted way:

tkey = random2key(PBKDF2(secret, salt, iterations, keyLength))
key = DK(tkey, "kerberos")

Basically, run the secret and salt through the PBKDF2 key derivation function for a given number of iterations, output a key of the particular length needed by the AES flavor, and then run it through the Kerberos DK function with the known constant string “kerberos”. Oh, and the iterations count is configurable! That means different implementations can select different values AND IT’S NOT INCLUDED ANYWHERE IN THE REQUEST BUT THAT’S OKAY JUST USE THE DEFAULT OF 4096. Oh yeah, and random2key does nothing.
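The PBKDF2 stage is reproducible with the Python standard library. The secret and salt below are placeholders, and the follow-on DK(tkey, "kerberos") step needs AES, so it’s left out of this sketch:

```python
import hashlib

def aes_string_to_key_first_stage(secret, salt, iterations=4096, key_len=32):
    # tkey = PBKDF2-HMAC-SHA1(secret, salt, iterations, keyLength);
    # key_len is 32 for AES256, 16 for AES128
    return hashlib.pbkdf2_hmac("sha1", secret.encode(), salt.encode(),
                               iterations, key_len)

tkey = aes_string_to_key_first_stage("hunter2", "FOO.COMserver01")
assert len(tkey) == 32
```

Note how the salt feeds directly into the derived key, which is why a salt mismatch between issuer and verifier silently produces the wrong key.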

On Decrypting the Token

Decrypting the token is a relatively tame process comparatively. You take that computed key and run it through that DK function again, this time including the expected usage (KrbApReq vs KrbAsReq etc.) and various other constants a few times and then run the ciphertext through the AES transform with an empty initialization vector (16 0x0 bytes). Of course, the AES mode isn’t something normal; it uses CTS mode, which is complicated and probably insecure — so much so that .NET doesn’t implement it. Thankfully it’s relatively easy to bolt on to the AES CBC mode.
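To show just the ciphertext-stealing plumbing, here’s a toy CBC that performs the final-block swap. The “block cipher” is a keyed XOR, which is not remotely secure, and only block-aligned input is handled (real CTS also steals from the penultimate block when the final block is partial); this only illustrates the block shuffling, not Kerberos’ actual AES transform.

```python
BLOCK = 16

def toy_block_encrypt(key, block):
    # keyed XOR as a stand-in block cipher: NOT secure, but conveniently
    # its own inverse, which keeps the sketch short
    return bytes(a ^ b for a, b in zip(block, key))

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def cbc_cts_encrypt(key, plaintext, iv=b"\x00" * BLOCK):
    assert len(plaintext) % BLOCK == 0 and len(plaintext) >= 2 * BLOCK
    blocks, prev = [], iv
    for i in range(0, len(plaintext), BLOCK):
        prev = toy_block_encrypt(key, xor(plaintext[i:i + BLOCK], prev))
        blocks.append(prev)
    blocks[-1], blocks[-2] = blocks[-2], blocks[-1]  # CTS: swap last two blocks
    return b"".join(blocks)

def cbc_cts_decrypt(key, ciphertext, iv=b"\x00" * BLOCK):
    blocks = [ciphertext[i:i + BLOCK] for i in range(0, len(ciphertext), BLOCK)]
    blocks[-1], blocks[-2] = blocks[-2], blocks[-1]  # undo the swap
    out, prev = [], iv
    for b in blocks:
        out.append(xor(toy_block_encrypt(key, b), prev))
        prev = b
    return b"".join(out)

key = bytes(range(16))
message = bytes(range(48))  # three full blocks
assert cbc_cts_decrypt(key, cbc_cts_encrypt(key, message)) == message
```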

You do get the real token once you run it through the decryptor though. But then you need to verify the checksum using a convoluted HMAC SHA1 scheme.

You may notice all this exists in a separate project. This is because the HMAC scheme requires access to intermediate bits of the SHA1 transform, which aren’t available using the built in .NET algorithms. Enter BouncyCastle. 🙁

Funny story: I originally thought I needed BouncyCastle for AES CTS support, but after finding their implementation broken (or more likely my usage of their implementation) I found out how to do it using the Framework implementation. If it wasn’t for the checksum I wouldn’t need BouncyCastle — doh!

That said, if anyone knows how to do this using the native algorithms please let me know or fork and fix!

The resulting value is then handed back to the core implementation and we don’t care about the AES bits anymore (well, after repeating the same process above for the Authenticator).

On Cleaning Up the Code

I mentioned originally that I did some clean up as well. This was necessary to get the AES bits added in without making it hacky and weird. Now keys are treated as special objects instead of just as bytes of data. This makes it easier to secure in the future (encrypt in memory, or offload to secure processes, etc.).

I also added in PAC decoding because you get some pretty useful information like group memberships.

There’s still plenty to do to make this production worthy, but those are relatively simple to do now that all this stuff is done.

Microsoft recently released the Azure AD Single Sign On preview feature, which is a way to support Kerberos authentication into Azure AD. The neat thing about this is that you don’t need ADFS to have an SSO experience if you’ve already got AD infrastructure in place. It works the same way as in-domain authentication, via a Kerberos ticket granting scheme.

This is a somewhat confounding feature for anyone who has experience with Kerberos in Windows, because every party needs to be domain-joined for Kerberos to work. This doesn’t seem possible in the cloud considering it’s a) not your box, and b) over the public internet. Now, you could create one gigantic AD environment in Azure and create a million trusts to each local domain, but that doesn’t scale and managing the security of it would… suck. So how does Azure AD do it?

It’s deceptively simple, actually. Kerberos is a federated authentication protocol. An authority (Active Directory — AD) issues a ticket to a user targeting the service in use which is then handed off to the service by the user. The service validates that the ticket came from a trusted source (the authority — AD) and converts it into a usable identity for the user. The validation is actually pretty simple.

  1. Both the service and authority know a secret
  2. The authority encrypts the ticket against the secret
  3. The service decrypts the ticket using the secret
  4. The ticket is validated if decryption succeeds

It’s a little more complicated than that, but that’s the gist.
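The four steps above, as a toy sketch. An HMAC tag stands in for the ticket encryption, and the payload and secret are made up; the point is that validation is just "can the other side of the shared secret open this?":

```python
import hashlib, hmac, json

# step 1: both the service and the authority know a secret
secret = hashlib.sha256(b"AZUREADSSOACC password").digest()

def issue_ticket(payload):
    # step 2: the authority "encrypts" the ticket against the secret
    body = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, body, hashlib.sha256).digest() + body

def validate_ticket(blob):
    # steps 3 and 4: if it "decrypts" (here: verifies), it's trusted
    tag, body = blob[:32], blob[32:]
    if not hmac.compare_digest(tag, hmac.new(secret, body, hashlib.sha256).digest()):
        raise ValueError("ticket not issued by a trusted source")
    return json.loads(body)

ticket = issue_ticket({"upn": "alice@contoso.com"})
assert validate_ticket(ticket)["upn"] == "alice@contoso.com"
```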

Nothing about this process inherently requires being domain-joined — the only reason (well… not the only reason) you join a machine to a domain is so this secret relationship can be kept in sync.

With this bit of knowledge in hand, it’s not that big a leap to think you can remove some of the infrastructure requirements and have AD issue a ticket to the cloud. In fact, if you’ve had experience with configuring SSO into apps running on non-Windows boxes you know it’s possible because those boxes aren’t domain joined.

So enough background… how does this work?

During installation and configuration:

  1. A Computer object is created in AD representing Azure AD called AZUREADSSOACC
  2. Two Service Principal Names (SPNs) are added to the account
    • HOST/
    • HOST/
  3. A password is created for this account (the secret)
  4. The secret is sent to Azure AD
  5. Both domains of the SPNs are added as intranet zones to any machine doing SSO

Authentication works as follows:

  1. When a user hits and enters their username, an iframe opens up and hits the autologon domain
  2. The autologon domain returns a 401 with WWW-Authenticate header, which tells the browser it should try and authenticate
  3. The browser complies since the domain is marked as being in the intranet zone
  4. The browser requests a Kerberos ticket from AD using the current user’s identity
  5. AD looks up the requesting autologon domain and finds it’s an SPN for the AZUREADSSOACC computer
  6. AD creates a ticket for the user and targets it for the AZUREADSSOACC by encrypting it using the account’s secret
  7. The browser sends the ticket to the autologon domain
  8. The autologon domain validates the ticket by decrypting it using the secret that was synced
  9. If the ticket can be decrypted and it meets the usual validation requirements (time, replay, realm, etc.) then a session key is pushed up to the parent window
  10. The parent window observes the session key and auto logs in

The only sort of synchronization that occurs is that initial object creation and secret generation. AD doesn’t actually care that the service isn’t domain joined.

Technically, as far as AD knows, the service is domain joined by virtue of it having a computer account and secret, but that’s just semantics. AD can’t actually manage the service.

That’s all there is to it.