Kerberos.NET is built to be used across multiple platforms, but there are some caveats.

Historically people have tended to think .NET is Windows-only and therefore any library built on it will only run on Windows. That hasn't been the reality for a few years now, and .NET will run mostly anywhere these days. Kerberos.NET will run on any supported .NET Standard 2.0 runtime, which today means Windows, Linux, and macOS. The recommended installation method is the NuGet package:

PM> Install-Package Kerberos.NET

However, there are some important caveats.

Cryptographic Support

The cryptographic primitives used by the Kerberos protocols are available in .NET across all platforms; however, there are extensions to the protocol that rely on primitives that are not exposed at the framework level. There are two extensions that do not work for the time being:

  1. RFC 4757: RC4-HMAC -- This would be better described as the MD4/RC4/HMACMD5 suite. It's the de facto suite introduced in Windows 2000 for compatibility reasons and has since been deprecated by the standards bodies. The MD4 hash is not exposed through .NET (for good reason) and must be P/Invoke'd or written in managed code. It's not a particularly complicated hashing algorithm, but it's also not worth taking on for the sake of cross-platform support. You're not affected in any way if you're using AES (as you should be).
  2. RFC 4556: PKINIT -- This relies on a Diffie-Hellman key agreement, which is also not exposed through .NET (also for good reason). This is somewhat more problematic because it's an important extension and should be supported everywhere. It's not a particularly complicated algorithm, and in fact there's a poorly written implementation in the test harness. This needs to be better before it would be considered for library support. This might also be moot once Elliptic Curve support is added.

Adding Your Own Cryptographic Implementation

If you're so inclined and need either of the above two extensions, you can bring your own implementation by registering your own platform layer:

CryptoPal.RegisterPal(() => new MyCustomCryptoPalImplementation());

It needs to be called before any crypto calls occur; otherwise you're going to get defaulted to the existing platform implementation.
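As a minimal sketch, that means doing the registration once at process startup, before constructing any clients or validators (MyCustomCryptoPalImplementation here is a placeholder for your own CryptoPal-derived type):

static void Main(string[] args)
{
    // Register the custom PAL before anything touches crypto; later calls
    // would otherwise fall back to the default platform implementation.
    CryptoPal.RegisterPal(() => new MyCustomCryptoPalImplementation());

    // ... the rest of the application; KerberosClient, KerberosAuthenticator,
    // etc. now resolve their primitives through the registered PAL.
}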

DNS Resolution Support

Kerberos relies on DNS for resolving KDC locations by checking for SRV records of the Kerberos service at _kerberos._tcp.realm.net (or UDP). This is necessary if you don't know the location of the KDC you need to authenticate against. Unfortunately .NET doesn't expose a way to query SRV records, and the library relies on P/Invoke into the Windows DnsQuery_W function to do this. You can bring your own DNS query logic by overriding the QueryDomain method on the client transport classes TcpKerberosTransport, UdpKerberosTransport, and HttpsKerberosTransport.
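As a rough, untested sketch, such an override might look like the following. The exact constructor and QueryDomain signature should be checked against the transport base classes in the version you're using, and MySrvResolver is a hypothetical stand-in for whatever SRV-capable resolver you prefer:

public class CustomDnsTcpTransport : TcpKerberosTransport
{
    // The signature below is an assumption; match it to the actual virtual
    // member exposed by TcpKerberosTransport.
    protected override DnsRecord QueryDomain(string lookup)
    {
        // lookup is the SRV name, e.g. _kerberos._tcp.corp.example.com;
        // resolve it with your own resolver and map the result into a DnsRecord.
        return MySrvResolver.ResolveSrv(lookup);
    }
}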

Unit Tests

The unit tests are inherently Windows-centric and rely on P/Invoke for a myriad of interop and compatibility reasons. This isn't a problem for developers just grabbing the nuget package, but it is a bit problematic for anyone wanting to extend the library or fix the above limitations on non-Windows platforms. Eventually the tests may be reorganized and tagged as platform-specific, but that's not a high priority at the moment.

The Kerberos.NET library has undergone significant redevelopment over the last year and has introduced many new features across both client and server. This post dives deep into how this project is designed and intended to be used.

The Kerberos.NET library is a standards-compliant-ish implementation of [RFC 4120], the Kerberos network authentication service. It also includes bits and pieces of several related specifications.

There's no guarantee the implementations of these additional protocols are strictly compliant, but they are implemented well enough to be compatible with most major vendors.

The library itself is built against .NET Standard 2.0 to keep it compatible with .NET Framework. It has a NuGet package:

PM> Install-Package Kerberos.NET

Architecture

There are three logical components of the Kerberos.NET library: client, KDC, and application server. Each component relies on shared services for things like cryptographic primitives or message encoding. The logical structure looks a little like this:

(Diagram: kerbnet.png, showing the client, KDC, and application server components and the shared services they rely on.)

The Kerberos Client

The client is implemented as KerberosClient and can be extended through overriding methods and properties in derived types. The client is transport agnostic, meaning it relies on other classes to provide on-the-wire support.

There are three built-in transport types: TCP, UDP, and HTTPS. The TCP transport is the only one enabled by default because it's the most reliable in real networks. The HTTPS transport is actually an implementation of the [MS-KKDCP] proxy protocol, which is effectively just the Kerberos TCP messages wrapped in HTTP over TLS.

The client is intended to be scoped per-user and hides most of the protocol goo from the caller for the sake of simplicity. The fundamental flow is:

  1. Get credentials for user (username+password or X509 Certificate)
  2. Call client.Authenticate(creds) to request a TGT (ticket-granting-ticket)
  3. Call client.GetServiceTicket(spn) to get a ticket to a specific service

The tickets are cached in the client instance but do not persist across instantiations (an exercise left to the caller).
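In code, that flow looks roughly like this (the realm, password, and SPN below are placeholders):

var client = new KerberosClient();

// 1. credentials for the user
var credential = new KerberosPasswordCredential("alice@corp.example.com", "P@ssw0rd!");

// 2. AS-REQ / AS-REP exchange; the resulting TGT is cached in the client
await client.Authenticate(credential);

// 3. TGS-REQ / TGS-REP exchange for a ticket to a specific service
var ticket = await client.GetServiceTicket("host/app.corp.example.com");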

Kerberos messages are generated by the client, and all the cryptographic manipulations for encryption and signing are handled by the KerberosCredential-derived classes. There are three native credential types supported by this library (so far): KerberosPasswordCredential, KeytabCredential, and KerberosAsymmetricCredential. These credentials implement their named pre-authentication and key exchange flows.

Pa-Enc-Timestamp is the default pre-auth method and is implemented in the KerberosCredential base class.

Password Credential

Password credentials are used for providing the long-term encryption key for the Pa-Enc-Timestamp pre-auth data and AS-REP session key. Passwords require key derivation with salts as provided by the KDC in Krb-Error messages in order to reliably handle AES ciphers. 

Keytab Credential

Keytab credentials are optimizations on password credentials and do not require key derivation. This can be problematic for callers if the KDC decides to rekey, so keytabs are only recommended in controlled environments.

Asymmetric Credential

Asymmetric credentials are RSA or ECC keys for use with PKINIT. They're often protected in hardware through smart cards. The PKINIT client auth flow only implements the Diffie-Hellman key exchange for forward-secrecy of session keys. DH keys are not reused, but callers can implement a caching mechanism by overriding the CacheKeyAgreementParameters and StartKeyAgreement methods. ECDH is not supported as of today (Dec 2019), but it's in the works.

The Key Distribution Center (KDC)

The KDC is a server implementation that authenticates users and issues tickets. The KDC implements the AS-REQ and TGS-REQ message flows through message handlers, which receive messages through a protocol listener. The only built-in protocol listener is for TCP sockets. Other protocol listeners are left to the caller to implement, as they will most likely need to be customized for each use case. A sample KDC server can be found in the KerberosKdcHostApp project.
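A condensed sketch of that sample looks something like the following; the option and type names should be checked against the KerberosKdcHostApp project, and MyRealmService stands in for your own IRealmService implementation:

var options = new ListenerOptions
{
    // Endpoint and realm are placeholders
    ListeningOn = new IPEndPoint(IPAddress.Loopback, 88),
    DefaultRealm = "CORP.EXAMPLE.COM",
    RealmLocator = realm => new MyRealmService(realm)
};

var listener = new KdcServiceListener(options);

// Blocks until the listener is stopped; incoming TCP messages are routed
// to the AS-REQ and TGS-REQ handlers registered on the KdcServer.
await listener.Start();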

AS-REQ and TGS-REQ Message Handling

Messages are received by a KdcServer instance, which has message handlers registered against it. Messages are parsed by the instance by peeking at the first ASN.1 tag it sees. Each tag type is handled by dedicated handlers. There are two handlers built-in: KdcAsReqMessageHandler and KdcTgsReqMessageHandler. It's up to the message handler to validate for protocol correctness and respond as necessary.

Pre-Authentication

Pre-authentication is handled through a similar mechanism aptly named pre-auth handlers. A pre-auth handler is invoked after the first bit of the AS-REQ message handling has parsed the requested username and has looked up the user through the IRealmService. Pre-authentication requirements are determined based on properties of the found user object. If a user is found to require pre-auth and the requisite pre-auth data isn't present, an error is returned to the client.

User and Service Principals

This library doesn't have a strong opinion on what principals or services should look like and only requires a small amount of information to fulfill most requests. The logical flow of a full Kerberos authentication looks like this as far as principals are concerned:

  1. AS-REQ is received and the Body.CName is parsed
  2. The CName is passed to an IRealmService => IPrincipalService.Find()
  3. The result is IKerberosPrincipal which indicates what pre-auth types are enabled or required, what long term secrets it supports, and how to generate the user PAC
  4. The well-known service krbtgt is located through IRealmService => IPrincipalService.Find()
  5. A TGT is generated for the user and encrypted to the krbtgt service key and wrapped in an AS-REP message
  6. The AS-REP is returned to the caller
  7. TGS-REQ is received and the TGT is extracted from the message
  8. The TGT is decrypted by looking up the krbtgt service account key through IRealmService => IPrincipalService.Find()
  9. The TGS-REQ target service is located through IRealmService => IPrincipalService.Find()
  10. The user principal is checked to see whether it can get a ticket to the service (by default the answer is always yes)
  11. A new ticket for the user is created and encrypted against the service key located in step 9.

The IRealmService is a simple service provider to easily expose principal queries and access realm settings.

The Application Server

The third component is actually the original driver for building this library. Ticket validation is handled by an application service that has received a Kerberos ticket from a client such as a web browser and wants to turn it into a ClaimsIdentity.

There are two parts to validation: decrypting the ticket and converting it into an identity.

Decrypting the ticket is handled by the KerberosValidator, which decrypts the message, checks timestamps, and verifies it's not a replay.

Conversion to ClaimsIdentity is handled by the KerberosAuthenticator, which extracts principal data as well as the PAC authorization data that makes up group memberships and claims.
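Putting the two together, usage looks roughly like this (the keytab path and negotiateHeaderValue, the base64 ticket received from the client, are placeholders):

// Validate the ticket against the service's keytab and convert it into an identity
var authenticator = new KerberosAuthenticator(
    new KeyTable(File.ReadAllBytes("sample.keytab")));

var identity = await authenticator.Authenticate(negotiateHeaderValue);

Console.WriteLine(identity.Name);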

The Privilege Attribute Certificate

The PAC is where authorization data like group memberships and claims are carried in Kerberos tickets. This is a Windows-specific detail and as such has some unique characteristics that don't match the rest of the Kerberos protocol.

The PAC is a sequentially ordered C-like structure containing ten or so fields of data. The primary structure is KERB_VALIDATION_INFO, which is an NDR-encoded message (also known as RPC encoding). The NDR marshaling is handled by the NdrBuffer class and relies on each encoded struct to provide their own marshal/unmarshal steps.

The ASN.1 Serializer

Kerberos is a protocol serialized using ASN.1 DER encoding. There aren't many good serializers out in the wild, and those that are good aren't particularly developer-friendly. The .NET team has an ASN.1 library built into .NET, so I made a fork of that with some minor tweaks. The gist of these tweaks was:

  • Set default precisions that match Kerberos specs
  • Added GeneralString which Kerberos treats as IA5Encoding for historical reasons
  • Modified the generator to output classes instead of structs (probably the most heinous tweak)
  • Added message inheritance because a bunch of the messages are semantically the same

Messages are represented as entities during build time and are run through an XSLT transform producing strongly-typed classes. These classes are public and designed to be used as first-class citizens when interacting with the library. Encoding and decoding is considered an implementation detail and marked as internal.

Samples

All of the samples can be found under the root of the project on GitHub.

The /samples/ folder contains a handful of projects that show how you can use the KerberosAuthenticator to validate tickets.

The KerbDumpCore folder contains a useful tool to dump the contents of a Kerberos ticket if you have the right key. It's useful for troubleshooting and debugging.

The KerberosClientApp folder contains a project that implements all the client-side features for testing against a user-provided KDC.

The KerberosKdcHostApp folder contains a project that implements a simple KDC server that listens on a TCP socket. Useful when used with the above client app.

If you haven’t heard, we’re lighting up Token Binding on all our important services. Azure AD is one of the first to take advantage of this. Check out what Alex and Pamela have to say on the topic.

One of the most important of these improvements is the Token Binding family of specifications which is now well on its way towards final ratification at the Internet Engineering Task Force (IETF). (If you want to learn more about token binding, watch this great presentation by Brian Campbell.)

At Microsoft, we believe that Token Binding can greatly improve the security of both enterprise and consumer scenarios by making high identity and authentication assurance broadly and simply accessible to developers around the world.

Given how positive we believe this impact can be, we have been and continue to be deeply committed to working with the community for creation and adoption of the token binding family of specifications.

Now that the specifications are close to ratification, I’d like to issue two calls to action:

  1. Begin experimenting with token binding and planning your deployments.
  2. Contact your browser and software vendors, asking them to ship token binding implementations soon if they aren’t already.

I happen to own Token Binding in Windows and I have some thoughts on the matter as well…

If you’ve ever had an experience with Credential Providers in the past, you may think the title of this post is insane — it’s not.

I recently came across an interesting Github project that showed it was theoretically possible. I wanted to see if I could replicate the results myself without digging too deeply into how they did it. As it happens, I could.

I created a sample project that outlines how you can do it on Github: CredProvider.NET

.NET Credential Provider

Historically, Credential Providers have been complex components to write because they’re COM-based, and because they have weird restrictions you have to contend with since they run in the winlogon UI process. Both of these restrictions should make any sane developer run away — doubly so if you’re a .NET developer. However, if you’ve been around long enough, you may remember that COM-interop thing that makes it possible to call .NET code from COM components (and conversely, call COM components from .NET) and have the runtime do a whole host of evil to keep the two relatively stable.

The trick with interop is getting the interfaces right. Everything communicates through interfaces — it’s the only way to guarantee a stable contract between APIs. This is the crux of our implementation. You can codify them manually through MSDN definitions, but that’s error-prone and not necessarily completely accurate. Thankfully, the Windows SDK contains an IDL file that defines the interfaces we need. All we need to do is convert them to something .NET understands. We do this in a two-step process where we first compile the IDL to a type library, and then convert the type library to a .NET assembly.

You can compile a type library using the midl.exe tool. You’ll need to install the Windows SDK first though, otherwise you won’t have the IDL file.

midl "C:\Program Files (x86)\Windows Kits\10\Include\10.0.16299.0\um\credentialprovider.idl" 
-target NT62 /x64

You can see I’m using the latest 10.0.16299.0 build of the SDK. This particular version requires you explicitly target a version for some reason — older versions of this IDL don’t require the targeting.

This produces a few files, most notably the credentialprovider.tlb file. This is the type library. We need to convert this to an assembly. Normally, you’d just use tlbimp.exe, but it’s notoriously opinionated on how you talk to COM, leaving you stuck in a few ways (for instance, it expects you to throw exceptions instead of returning HRESULTs, despite the interface requiring you to return HRESULTs). It turns out the CLR Interop team built a set of tools to help you through this pain — specifically tlbimp2.exe (why they didn’t just ship this capability in the original tool is a mystery).

TlbImp2.exe credentialprovider.tlb /out:CredProvider.NET.Interop.dll 
/unsafe /verbose /preservesig /namespace:CredProvider.NET

The key is the /preservesig parameter.

Once you’ve created the assembly, you can import it into a project and start creating your credential provider.

Credential Providers are made up of two core interfaces:

  • ICredentialProvider — Manages the interaction between the logon UI and your credentials
  • ICredentialProviderCredential — Tells the UI how to receive credentials from the user

There are a few related interfaces, but they are just there to enhance the capabilities of the credential provider. Once you implement your interfaces, you need to decorate the class with some attributes to tell COM what's exposed as an implementation:
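A sketch of that decoration is below; the Guid matches the registry entry shown further down, while the ProgId and class name are illustrative and the interface members are omitted:

[ComVisible(true)]
[ClassInterface(ClassInterfaceType.None)]
[Guid("00006d50-0000-0000-b090-00006b0b0000")]
[ProgId("CredProvider.NET")]
public class CredentialProvider : ICredentialProvider
{
    // ICredentialProvider members (SetUsageScenario, GetCredentialCount, etc.)
    // implemented here
}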

 

You only need to do this on the class implementing the ICredentialProvider interface.

At this point you may notice a few annoying things. First, not all the interfaces are present in the assembly. A few interfaces were added in the Windows 8.0 timeframe, specifically the ICredentialProviderCredential2 interface, which allows you to follow the newer design guidelines. Second, some of the typedefs don’t translate correctly. For instance, HBITMAP gets mangled. Fixing these issues requires a little surgery on the credentialprovider.idl file.

First, you need to move the library declaration to the beginning of the file. It’s already present somewhere in the middle of the file (search for “library CredentialProviders”). Just move it to the beginning of the file so it includes all the extra interfaces. Second, find and replace any mangled pointer types with HANDLE (e.g. HBITMAP* to HANDLE*). Now you’ll need to recompile with midl and regenerate the library with tlbimp2.

You can see how I did it with this diff.

Unrelated, but you may notice a number of things were taken out. This is because I modified the file without making a backup (doh!), and had to search for the original online and could only find the first release (I suppose I could have just reinstalled the SDK, but that’s beside the point). Anyway, notice how there’s an ICredentialProviderCredential3? I suspect that’s how Windows Hello does the eye-of-Sauron graphic when it’s looking for you.


You may also want to explicitly register for COM interop on build. This will make the provider available on each build, but the downside is that it requires running Visual Studio as admin.


You need to tell Windows you’ve got a Credential Provider once it’s registered. You do this by adding it to the registry.

[HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\Authentication\Credential Providers\{00006d50-0000-0000-b090-00006b0b0000}]
@="Credential Provider.NET"

Notice that the GUID matches the Guid in the class attribute. This is intentional. This tells Windows that the COM component with that given guid is a Credential Provider, and it should new-up an instance of it.

The other thing you’ll want to figure out is how to test this thing. You’ll notice all the images above show it being used within a user session, prompted via CredUIPromptForWindowsCredentials. This is intentional, because testing it on your own machine’s winlogon interface is an exercise in madness. Unhandled errors that cause your provider to crash will render the logonUI unusable and, well, you’re screwed. You have to boot into safe mode and remove the provider registration, but the provider is still loaded so you can’t log into safe mode! So you need to boot into recovery mode, and… — have you made a recovery image? Using CredUIPromptForWindowsCredentials will let you test your code easily and quickly, and it generally behaves like winlogon. Lastly, test it on a VM once it’s ready for more in-depth testing.

And that’s all it takes to start writing a Credential Provider in .NET.

Some Notes

It’s worth noting that this is possibly (probably?) a bad idea. I don’t know how stable this will be in the wild.

Some References

I came across this Stack Overflow post in my attempts to figure out the ICredentialProviderCredential2 problem, which led to this sample. Their recommendation to use tlbimp2 was a huge revelation.

Recently I created the Enclave.NET project, and it shipped with a simple in-memory storage service. It’s not particularly useful in its current form, so I created a sample a couple of weeks ago that used Azure Key Vault as the storage and crypto service.

But Why?

Of course, you might be wondering why you’d want to use Enclave.NET this way, since Key Vault already provides isolation given that it’s hosted far away from your service. The obvious answer is that maybe you wouldn’t use it this way — it IS isolated enough. That said, this does provide you with some extra separation because your app now doesn’t have access to the secrets to connect to Key Vault — we’re working on the premise that every little bit helps.

And because it’s my project and my site and it seemed like a neat little idea. 😀

So how exactly does this work?

If you remember from before, the service is supposed to be hosted locally, but outside of your own application, and it relies on an HTTPS channel to offload data protection. That means we have a service host. In this case it’s just a console application. You’d likely want this running in a Windows service or Linux daemon in production so it’s not constantly having to spin up.

The service host looks for the settings.json file and spins up a web host given the provided server and client certificates.

You’ll also notice there’s a transform type that lets you set up your own services. This just injects the Key Vault storage and processing services.

You can find the KeyVaultStorageService and the KeyVaultCryptoProcessor in the sample project.

Once those are set, you can call the service from the client, and at this point the client has no idea where the keys are, or how they’re being used.

The neat thing about this is that it provides a great abstraction between the internal workings of the crypto bits, and the code that needs to protect data.