Recently I created the Enclave.NET project, and it shipped with a simple in-memory storage service. It’s not particularly useful in its current form, so a couple of weeks ago I created a sample that uses Azure Key Vault as the storage and crypto service.

But Why?

Of course, you might be wondering why you’d want to use Enclave.NET this way, since Key Vault already provides isolation given that it’s hosted far away from your service. The obvious answer is that maybe you wouldn’t use it this way — it IS isolated enough. That said, this does provide some extra separation, because your app no longer has access to the secrets needed to connect to Key Vault — we’re working on the premise that every little bit helps.

And because it’s my project and my site and it seemed like a neat little idea. 😀

So how exactly does this work?

If you remember from before, the service is supposed to be hosted locally, but outside of your own application, and it relies on an HTTPS channel to offload data protection. That means we have a service host; in this case it’s just a console application. In production you’d likely want this running as a Windows service or Linux daemon so it’s not constantly having to spin up.

The service host looks for the settings.json file and spins up a web host given the provided server and client certificates.
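
Conceptually the file looks something like the snippet below. The property names here are illustrative rather than the real schema, so check the sample if you’re following along:

{
  // illustrative shape only; see the sample for the real schema
  "server": "https://localhost:4433",
  "serverCertificateThumbprint": "1234abcd...",
  "clientCertificateThumbprints": [ "5678efab..." ]
}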

You’ll also notice there’s a transform type that lets you set up your own services. This just injects the Key Vault storage and processing services.
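
In sketch form, the transform just swaps the service registrations. The interface and type names below are paraphrased from memory (the real code is in the sample), so treat this as illustrative:

using Microsoft.Extensions.DependencyInjection;

// Illustrative sketch; the real transform ships with the sample project.
public class KeyVaultTransform : IStartupTransform
{
    public void Transform(IServiceCollection services)
    {
        // Replace the in-memory defaults with the Key Vault-backed services
        services.AddSingleton<IStorageService, KeyVaultStorageService>();
        services.AddSingleton<ICryptoProcessor, KeyVaultCryptoProcessor>();
    }
}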

You can find the KeyVaultStorageService and the KeyVaultCryptoProcessor in the sample project.

Once those are set, you can call the service from the client, and at this point the client has no idea where the keys are, or how they’re being used.
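
In rough strokes, the calling code looks like this (the client type and method names are paraphrased, so don’t hold me to the exact signatures):

// Hypothetical client shape; see the sample for the real API
var client = new EnclaveClient(new Uri("https://localhost:4433"), clientCertificate);

var keyId = await client.GenerateKey();             // key is created and stored in Key Vault
var ciphertext = await client.Encrypt(keyId, data); // the raw key never crosses the wire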

The neat thing about this is that it provides a clean abstraction between the internal workings of the crypto bits and the code that needs to protect data.

Recently I came across an interesting parameter of the New-SelfSignedCertificate PowerShell cmdlet — the -Signer parameter. This parameter allows you to provide a reference to an already-existing certificate that can be used to sign the newly-created certificate. This turns out to be an extremely useful little feature because sometimes you just need chained certificates to test stuff (so much so that I wrote a library that does this for me).

Not surprisingly, the cmdlet is much easier to use than the library.

$ca = New-SelfSignedCertificate -DnsName "My Certificate Authority" -CertStoreLocation "cert:\LocalMachine\My"
New-SelfSignedCertificate -DnsName "child.domain.com" -CertStoreLocation "cert:\LocalMachine\My" -Signer $ca

And voilà.

Signed Certificate

There’s a common problem that many applications run into when executing cryptographic operations, and that’s the fact that the keys they use tend to exist within the application itself. This is problematic because there’s no protection of the keys — the keys are recoverable if you get a dump of the application memory, or you’re able to execute arbitrary code within the application. The solution to this problem is relatively straightforward — keep the keys out of the application.

For that to be effective, you also need to move the crypto operations out of the application. This means any attacks on the application won’t yield the keys. This is the fundamental design for services like Azure Key Vault, Amazon CloudHSM, and any other HSM service or device. In fact, even Windows subscribes to this design, isolating CNG keys in LSASS.

I was thinking about this problem over the weekend and realized there isn’t really any good reference architecture out there that shows you how to build this design into your application. The services I mentioned earlier do a great job of protecting secrets, but they’re kind of designed with certain applications in mind — greenfield, cloud-first, or disconnected. This can make it difficult to migrate existing applications to use these services, or maybe you just can’t use them for whatever reason (regulatory, legal, etc.). And on top of all that, maybe you just don’t know where to start.

So, I did something dumb. I built a reference implementation that lets you move crypto operations out of your application and into a separate process.

Introducing: Enclave.NET.

READ THIS SECURITY WARNING

This is a reference implementation. That means it’s not designed to handle production loads, and absolutely is not built to withstand attack. It’s a sample intended to show how you might offload certain operations. There are probably some horrible bugs in here and there might even be vulnerabilities.

Was that sufficiently scary?

The basic idea is simple. You have an application that hosts a service. The service has a set of commands available to it:

  • Generate key
  • Encrypt
  • Decrypt
  • Sign
  • Validate

Your application calls these commands with the necessary payloads, the service does the thing, and returns a result. The service is HTTP-based, and protected with pinned client certificates.
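
The pinning itself isn’t anything exotic. On the calling side it amounts to attaching the client certificate and comparing the server certificate’s thumbprint in a validation callback. Here’s a minimal sketch of that idea, not the project’s actual code:

using System.Net.Http;
using System.Security.Cryptography.X509Certificates;

// Minimal pinning sketch: present our client certificate and only
// trust the one server certificate we expect.
// clientCertificate and expectedThumbprint come from your configuration.
var handler = new HttpClientHandler();
handler.ClientCertificates.Add(clientCertificate);
handler.ServerCertificateCustomValidationCallback =
    (request, cert, chain, errors) => cert != null && cert.Thumbprint == expectedThumbprint;

var http = new HttpClient(handler);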

The crypto operations are real, backed by the jose-jwt library, but they’re ephemeral — the keys are just stored in memory. The idea is that you can inject your own implementations as you see fit, so any migrations you might undergo can be gradual and painless.

There are two classes that need to be implemented — the crypto operations class, and the storage service. You can use the built-in crypto operations class InMemoryCryptoProcessor at your own risk, but you absolutely need to implement the storage, lest you lose all the keys when the app shuts down.

Crypto Processor
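
The gist of the contract follows from the command list above. I’m paraphrasing the shape here, so the exact names may differ from the project:

using System.Threading.Tasks;

// Illustrative shape only; match it against the real interface in the project.
public interface ICryptoProcessor
{
    Task<string> GenerateKey();                            // returns a key identifier
    Task<byte[]> Encrypt(string keyId, byte[] plaintext);
    Task<byte[]> Decrypt(string keyId, byte[] ciphertext);
    Task<byte[]> Sign(string keyId, byte[] data);
    Task<bool> Validate(string keyId, byte[] data, byte[] signature);
}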

Storage Service
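
The storage side is even smaller; conceptually it only needs to save and fetch keys by identifier. Again, the names here are illustrative:

using System.Threading.Tasks;

// Illustrative shape only; a durable implementation might target a
// database, DPAPI, or a service like Key Vault.
public interface IStorageService
{
    Task Save(string keyId, byte[] keyMaterial);
    Task<byte[]> Get(string keyId);
}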

You can modify the startup code on your own, or you can implement IStartupTransform and configure it:
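
Something along these lines; the Transform signature is paraphrased, so verify it against the project:

using Microsoft.Extensions.DependencyInjection;

// Illustrative: keep the built-in processor, swap in your own storage.
public class MyTransform : IStartupTransform
{
    public void Transform(IServiceCollection services)
    {
        services.AddSingleton<ICryptoProcessor, InMemoryCryptoProcessor>();
        services.AddSingleton<IStorageService, MyStorageService>(); // your implementation
    }
}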

Calling the service is simple:
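
Again in sketch form, with the client shape paraphrased rather than quoted:

// Hypothetical client shape; the sample shows the real calls
var client = new EnclaveClient(new Uri("https://localhost:4433"), clientCertificate);

var keyId = await client.GenerateKey();
var signature = await client.Sign(keyId, data);
var valid = await client.Validate(keyId, data, signature); // true if it checks out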

For more information take a look at the sample app.

I started the Kerberos.NET project with a simple intention: be able to securely parse Kerberos tickets for user authentication without requiring an Active Directory infrastructure. This has been relatively successful so far, but one major milestone I hadn’t hit yet was making sure it worked with .NET Core. Well, it now works with .NET Core.

Porting a Project

There is no automated way to port a project to .NET Core. This is because it’s a fundamentally different way of doing things in .NET and things are bound to break (I’m sure that’s not actually the reason). There is documentation available, but it’s somewhat high-level and not necessarily useful when you’re in the weeds tracking down weird dependency issues. Given that, you’re kinda stuck just copying over the code and doing the build-break-tweak-build-wtf dance until you have something working. This was my process.

  1. Create a new .NET Standard project — Standard dictates the API level the library is compatible with; Core dictates the runtime compatibility. I didn’t have any particular targeting requirements, so I made it as broad as possible. Now any runtime supporting .NET Standard can use this library.
  2. Copy all the code from the main project into the new project — I probably could have created the .NET Standard project in the same location, but it’s often easier to start with a blank slate and move things in.
  3. Build!
  4. Build fails — MemoryCache/ObjectCache don’t exist in the current API set. Thankfully this is isolated to just the Ticket Replay detection, so I was able to temporarily convert it to a simple dictionary. I eventually found a library that replaces the original caching library.
  5. Build fails again — SecurityIdentifier doesn’t exist in the current API set either. Doh! I wasn’t going to hold my breath waiting for it to be ported over, so I created my own class that served the same purpose. This also gave me the opportunity to fix some ugly code required to merge RIDs into SIDs, which added a nice little performance boost (see the sketch after this list).
  6. Build succeeds!
  7. Unload/remove the original .NET 4.6 projects from the solution.
  8. Adjust test project dependencies to use the new project instead.
  9. Run!
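
For the curious, merging a RID into a SID is just a binary append: a SID blob stores its sub-authority count at offset 1, so you grow the buffer by four bytes, bump the count, and write the RID as a little-endian 32-bit value. A minimal sketch of the idea (not the library’s exact code):

using System;

// SID layout: revision (1 byte), sub-authority count (1 byte),
// identifier authority (6 bytes), then 4 bytes per sub-authority.
static byte[] AppendRid(byte[] sid, uint rid)
{
    var merged = new byte[sid.Length + 4];
    Buffer.BlockCopy(sid, 0, merged, 0, sid.Length);

    merged[1]++; // one more sub-authority

    merged[merged.Length - 4] = (byte)(rid & 0xFF); // RID is little-endian
    merged[merged.Length - 3] = (byte)((rid >> 8) & 0xFF);
    merged[merged.Length - 2] = (byte)((rid >> 16) & 0xFF);
    merged[merged.Length - 1] = (byte)((rid >> 24) & 0xFF);

    return merged;
}

// e.g. append a user's RID to the domain SID blob
var userSid = AppendRid(domainSidBytes, 1234); // domainSidBytes: the domain SID in binary form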

Once I was able to get the test projects up and running I could run them through the test cases and verify everything worked correctly. Everything worked, except for the AES support. 😒

Porting a Project with a Dependency

I added support for AES tickets a while back and it was built in such a way that it lived in a separate package because it had an external dependency on BouncyCastle. I’m not a fan of core packages taking dependencies on third parties, so I built it as an add-on that you could wire-in later. This worked relatively well, until I needed to migrate to .NET Core.

It turns out there are a number of Core/Standard/PCL packages for BouncyCastle, but what’s the right one? Weeeeelll, none of them, of course!

At this point I decided to suck it up and figure out how to make SHA1 do what I wanted. One option was to muck with the internals of the SHA1Managed class via reflection, but that turned out to be a bad idea because the original developers went out of their way to make it difficult to get at intermediate values (there are philosophical arguments here; I don’t fault them for it, but it’s really frustrating). I considered rewriting the class based on the reference source, but that was problematic for the same basic reason. Eventually I ended up porting the BouncyCastle implementation, because it was relatively self-contained and already worked the way I needed.

Security note: You should never trust crypto code written by some random person you found on the internet. That said, there’s a higher chance of finding a vulnerability in other parts of the code than with the port of this algorithm, so…

This actually works out well because now all the code can go back into a single package without any dependencies whatsoever. Neat!

Porting a NuGet Package

The NuGet pieces didn’t really change much, but the manifest is now defined in the project file itself, and packages are built automatically.
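
Concretely, the handful of properties that used to live in a separate .nuspec now sit in the .csproj, and GeneratePackageOnBuild emits the .nupkg on every build. The values below are illustrative:

<PropertyGroup>
  <!-- illustrative values; the real manifest lives in the project file -->
  <TargetFramework>netstandard2.0</TargetFramework>
  <PackageId>Kerberos.NET</PackageId>
  <Version>1.0.0</Version>
  <GeneratePackageOnBuild>true</GeneratePackageOnBuild>
</PropertyGroup>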

Simpler Package Management

Now the package is just an artifact of the build, which will be useful if/when I ever move this to an automated build process.

Active Directory has had the ability to issue claims for users and devices since Server 2012. Claims allow you to add additional values to a user’s Kerberos ticket and then make access decisions based on those values at the client level. This is pretty cool because normally you can only make access decisions based on group membership, which is fairly static in nature. Claims can change based on any number of factors, but originate as attributes on the user or computer object in Active Directory. Not so coincidentally, this is exactly how claims on the web work via a federation service like ADFS.

Of course, claims aren’t enabled by default on Windows for compatibility reasons. You can enable them through Group Policy:

Computer Configuration > Policies > Administrative Templates > System > KDC > KDC support for claims, compound authentication and Kerberos armoring

Allow Claims in Group Policy

You can configure claims through the Active Directory Administrative Center, under Dynamic Access Control.

Active Directory Administrative Center Dynamic Access Control

You can see the Claim Types option in the left-hand menu. From there you can add a claim via the New > Claim Type menu.

The configuration is pretty simple. Find the attribute of the user or device you want to issue, and select whether it should be issued for users, computers, or both. You can configure more advanced settings below the fold, but you only need to select the attribute for this to work.

Add New Claim Type

Once the claim type is configured you can modify attributes of a user through the Attribute Editor in either ADAC or the AD Users and Computers console.

User Attributes in ADAC

That’s all it takes to start using claims. You do have to sign out and back in before these claims take effect, though, since Active Directory issues claims in the Kerberos ticket, and tickets are only issued during sign-in (or the myriad other times they’re issued, but signing out and back in is the most effective).

However, once you’ve signed out and back in you can pop open PowerShell and see the claims in your token:

[System.Security.Principal.WindowsIdentity]::GetCurrent().Claims | fl Type, Value

Type : ad://ext/department:88d4d68c39060f49
Value : Super secret division

Windows 8 and higher automatically extract the claims from the PAC in the ticket and make them available in the user token. Additionally, .NET understands claims natively and can extract them from the Windows token.
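
In C# that’s just the standard claims API, since WindowsIdentity derives from ClaimsIdentity. For example:

using System;
using System.Linq;
using System.Security.Principal;

var identity = WindowsIdentity.GetCurrent();

// AD-issued claims show up with the ad://ext/ claim type prefix
foreach (var claim in identity.Claims.Where(c => c.Type.StartsWith("ad://ext/")))
{
    Console.WriteLine($"{claim.Type}: {claim.Value}");
}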

And of course now Kerberos.NET! The library will automatically parse any user or device claims and stick them into the resulting claims produced during authentication:

Kerberos Claims
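
If you’re consuming the library directly, the claims just come back on the identity the authenticator returns. Roughly like this; I’m paraphrasing the API from the project’s README, so double-check the names:

// Paraphrased from the Kerberos.NET README; verify against the project
var authenticator = new KerberosAuthenticator(new KerberosValidator(key));

var identity = await authenticator.Authenticate(base64KerberosTicket);

foreach (var claim in identity.Claims)
{
    Console.WriteLine($"{claim.Type}: {claim.Value}");
}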

No configuration necessary. The library will do all the work. Enjoy!