LocalStorage for .NET

It happened once too often. For the last time, I wrote yet another helper class that serialized and deserialized some objects to the filesystem, just so they could be restored in a new debug session. “What a blessing JavaScript’s localStorage is…”, I thought to myself. And with that thought, I started work on LocalStorage for .NET.

LocalStorage is an extremely lightweight, dependency-less library with a simple concept: allow any .NET object to be stored in an in-memory store – and allow it to be persisted to the filesystem. It serves the purpose of filling the gap where most in-memory caches, or key/value stores, are too complex or require either an install or some sort of tedious configuration before you can start coding.

LocalStorage for .NET is inspired by – but totally unrelated to – JavaScript’s localStorage: a temporary caching library for low-demand data needs.


If you want to get started fast, simply install the library through NuGet:

PM> Install-Package LocalStorage

I’ve made some effort to describe the how-and-what in detail in the README of the project, along with a bunch of examples, but let me oversimplify its use with the following sample:

// initialize a new instance
// (see the README for more configurable options)
using (var storage = new LocalStorage())
{
    // store any object
    var weapon = new Weapon("Lightsaber");
    storage.Save("weapon", weapon);

    // ... and retrieve the object back
    var target = storage.Get<Weapon>("weapon");

    // or store + get a collection
    var villains = new string[] { "Kylo Ren", "Saruman", "Draco Malfoy" };
    storage.Save("villains", villains);

    // ... and get it back as an IEnumerable
    var storedVillains = storage.Query<string>("villains");

    // finally, persist the stored objects to disk (.localstorage file)
    storage.Persist();
}

Again, take a look at the GitHub repo for more in-depth information.

LocalStorage for .NET: it’s dead simple. It has no dependencies. It’s lightweight. And it’s got a memorable name, so hopefully you’ll consider giving it a go yourself someday.


Ignore changes to an existing file in a git repo

It frequently happens to me that I want to ignore changes to an existing file in a git repo – or, put otherwise, stop tracking changes for a specific file.

Obviously you can use a strategy where you commit a template file with a suffix, like config.rb.example or web.config.dist, and .gitignore the actual ones. But that approach is ideally suited to config files that only require a one-time setup. Personally, I find it quite convenient to be able to toggle change tracking for a specific file. For this purpose you might want to roll up your sleeves for the following git commands.
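That template strategy can be sketched as follows (the file and key names are invented for the example):

```shell
# demo in a throwaway repo; config.rb stands in for any machine-specific config file
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
printf 'api_key: secret\n' > config.rb         # the real, local-only config
cp config.rb config.rb.example                 # the committed template
echo 'config.rb' >> .gitignore                 # ignore the real one
git add config.rb.example .gitignore
git check-ignore config.rb                     # prints "config.rb": it is ignored
```

New team members copy the .example file to its real name once and fill in their own values.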

Ignoring all changes to a specific file:

git update-index --assume-unchanged <path_to_file>

And this is easily reverted: use the --no-assume-unchanged flag to enable tracking of changes again:

git update-index --no-assume-unchanged <path_to_file>

Perfectly sane. But these flags live on in your local index, and it might well be that someday you’ll forget exactly which files were ignored in your repo. When this happens, you can use the following command to list all files marked as assume-unchanged:

git ls-files -v | grep '^[[:lower:]]'
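Putting the toggle and the listing together, a scratch-repo walkthrough looks like this (web.config is just an example file):

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
echo 'setting=1' > web.config
git add web.config
git -c user.name=demo -c user.email=demo@example.com commit -qm "Add config"
git update-index --assume-unchanged web.config
# assume-unchanged entries are listed with a lowercase tag ('h' instead of 'H'):
git ls-files -v | grep '^[[:lower:]]'          # prints "h web.config"
git update-index --no-assume-unchanged web.config
```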

A final important note: when you have made changes to a file that is marked assume-unchanged and you decide to switch branches, you might run into the following error:

error: Your local changes to the following files would be overwritten by checkout: … Please, commit your changes or stash them before you can switch branches.

The error message git spits out is quite self-explanatory: you have to decide either to commit or discard the changes before switching branches.


Enabling gzip compression in a dotnet core webapi

A fine new addition to ASP.NET Core 1.1, released mid-November 2016, is the option to quickly configure and enable Gzip compression in your dotnet core application.

The recipe for enabling Gzip compression globally is quite easy, as the following two steps illustrate:

  1. Add the package Microsoft.AspNetCore.ResponseCompression
  2. Configure the middleware in your Startup.cs, as shown below:

    public void ConfigureServices(IServiceCollection services)
    {
        // configure the compression level
        services.Configure<GzipCompressionProviderOptions>(options => options.Level = CompressionLevel.Fastest);
        // add the response compression services
        services.AddResponseCompression(options => options.Providers.Add<GzipCompressionProvider>());
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        // enable the response compression middleware
        app.UseResponseCompression();
    }

There is also a video by the Visual Studio team that illustrates this. However, while it sounds like a solid deal at first sight, you should be aware that it comes with a lot of dependencies.

For example, by installing the Microsoft.AspNetCore.ResponseCompression package through VisualStudio you are prompted with a long list of dependencies:

In itself, having these dependencies does not really matter – especially in a dotnet core application, where dependencies are nested and somewhat isolated from other references. But awareness matters, and this is definitely something to be aware of.


Error: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

Certificates. It’s always a pain.

So when you’re attempting to access a remote webservice – regardless of whether it’s SOAP or REST – and your browser already gives you the heads-up:


… you know the pain is coming to a theater near you very soon. Invalid SSL certificates will hurt you – the developer.

Take, for example, a simple C# WebRequest. By default, the ServicePointManager plays it safe and hooks into all your requests. If the SSL certificate is invalid, your WebRequest will fail with the following exception:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

Fixing it the proper way is easy: make sure you’re talking to a valid SSL certificate. But when you’re reading this, that is probably not an option. In that case there is a workaround, albeit a risky one: simply tell the (static) ServicePointManager to skip all checks on remote certificates for your application, like so:

// tell the ServicePointManager to skip all SSL checks
// ... at your own risk
ServicePointManager.ServerCertificateValidationCallback =
    delegate { return true; };

And do note the “at your own risk” part. You probably realise what this does and what kind of security implications it has: by overriding the default checks with the snippet above, you’re accepting every certificate, valid or not, which leaves your application wide open to man-in-the-middle attacks.

It also raises the question: why would anyone allow you to do this? Are there any valid reasons why you would want to skip these checks? Sure. Give it a bit of imagination and think, for example, of a unit or integration test where you deliberately want to verify what happens with an internal self-signed SSL certificate. In the end, the option is available. And it smells. But from a pragmatic point of view, it keeps the show going.


Aurelia error: TypeError – customConfig.configure is not a function

When manually adding a custom bootstrap class – the main.js entry point, if you will – I suddenly stumbled upon the following error message in the console:

TypeError: customConfig.configure is not a function(…)

I found the resolution in a (closed) issue on GitHub, which states that it is not required to specify a value for the aurelia-app attribute if you use a configuration file to set the main entry point of your Aurelia application.

The aforementioned GitHub issue points this out specifically for webpack, but it occurs with the default starter pack (which uses config.js) as well.

It’s quite easy to fix. Simply change the following:

<body aurelia-app="main">

… into this simpler signature where you remove the explicit entrypoint “main”:

<body aurelia-app>

And you’re good to go. This of course assumes your web bundler has a specific configuration, like the config.js from the starter kit, which should state exactly what your app’s entry point is:



Error: zsh command not found

I encountered this error message on one specific macOS system, where not all NPM packages that were installed globally (and successfully, I might add) were available from the command line. In my case, a simple npm install -g jspm, to install JSPM’s CLI, would result in the following error message:

➜ ~ jspm -v
zsh: command not found: jspm

My first thought was to “blame” JSPM. Especially considering other NPM packages were available just fine from the command line. After some digging around it seemed related to something in my PATH variable.

By default, on macOS with NPM installed through Homebrew, the prefix should be /usr/local. To verify this, simply use:

$ npm config get prefix

If the output doesn’t match the earlier mentioned /usr/local path, like in my case where it was set to /Users/jhanssens/.npm-global/lib, simply change it accordingly:

$ npm config set prefix /usr/local
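One caveat, as a sketch: global binaries are linked into the bin directory under that prefix, so that directory has to be on your PATH, and packages installed globally under the old prefix have to be reinstalled (the paths below assume the Homebrew default):

```shell
# global npm binaries land in "$(npm config get prefix)/bin";
# with the Homebrew default that is /usr/local/bin
prefix=/usr/local
bindir="$prefix/bin"
case ":$PATH:" in
  *":$bindir:"*) echo "$bindir is on PATH" ;;
  *)             echo "$bindir is NOT on PATH - add it in ~/.zshrc" ;;
esac
# packages installed under the old prefix stay there; reinstall them, e.g.:
# npm install -g jspm
```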

Hiding .js and .map files in Visual Studio Code, when in a TypeScript project

When you are working with TypeScript in Visual Studio Code, you often don’t want to see generated JavaScript files and source maps in the explorer or in search results. I mean, this can get messy:


Look at all those bloated .js and .js.map files which clutter my otherwise perfectly clean file explorer. Yuck!

Luckily, there is a way to hide derived JavaScript files in VS Code out of the box: using a filter expression in the files.exclude setting, we can hide these derived files.

Simply start by navigating to: Code > Preferences > Workspace Settings

Next, in the right pane you’ll see the settings.json override file, where you can add the following:

// place your settings in this file to overwrite default and user settings.
{
    "files.exclude": {
        // include the defaults from VS Code
        "**/.git": true,
        "**/.DS_Store": true,

        // hide .js and .js.map files, when in a TypeScript project
        "**/*.js": { "when": "$(basename).ts" },
        "**/*.js.map": true
    }
}

This will match any JavaScript file (**/*.js), but hide it only if a sibling TypeScript file with the same name is present. The exclude of **/*.js.map will of course hide all corresponding map files. The result is that the file explorer no longer shows derived JavaScript resources compiled to the same location as their source.


“Missing a temporary folder” error in the WordPress Media Gallery

After upgrading to WordPress 4.4.x, I was suddenly getting the following error when trying to upload an image through the Media Gallery:

Missing a temporary folder.

Pretty self-explanatory, as I would imagine the update had something to do with a missing file path or permissions. It’s always a permissions problem, isn’t it? So, the first starting point for anything related to this is the Codex, where I found the get_temp_dir() function, stating:

Determine a writable directory for temporary files. Function’s preference is the return value of sys_get_temp_dir(), followed by your PHP temporary upload directory, followed by WP_CONTENT_DIR, before finally defaulting to /tmp/ In the event that this function does not find a writable location, It may be overridden by the WP_TEMP_DIR constant in your wp-config.php file.

Decent enough. It checks the system’s temp dir first, then the PHP override, and if neither is available you may override this using the WP_TEMP_DIR constant. Overriding sounds good, so let’s start with the latter to see if that provides the quick fix. Let’s add the mentioned constant to the wp-config.php file:

define( 'WP_TEMP_DIR', dirname(__FILE__) . '/wp-content/temp');
/* That's all, stop editing! Happy blogging. */

Now, normally, if the system’s temp directory wasn’t writable, explicitly defining a temp directory for WordPress like this should have fixed it. It didn’t, though. Because the error persisted, I figured the best course of action was to check what the file paths are, whether they even exist and whether they’re writable. Simply adding the following to a content page would surely give me the appropriate info:

        sys_get_temp_dir: <?php echo sys_get_temp_dir() . 
            ', exists: ' . file_exists(sys_get_temp_dir()) . 
            ', writable: ' . is_writable(sys_get_temp_dir()); ?>
        get_temp_dir: <?php echo get_temp_dir() . 
            ', exists: ' . file_exists(get_temp_dir()) . 
            ', writable: ' . is_writable(get_temp_dir()); ?>

Which spat out:

  • sys_get_temp_dir: /tmp, exists: 1, writable: 1
  • get_temp_dir: /var/…longpath…/wp-content/temp/, exists: 1, writable: 1

Strange. Both the system temp path and the WordPress override exist AND are writable. What gives? To cut a long story short, after lots of chmods and vague permission checks, the answer was provided in this post, by Gilles:

The normal settings for /tmp are 1777, which ls shows as drwxrwxrwt. That is: wide open, except that only the owner of a file can remove it (that’s what this extra t bit means for a directory).

The problem with a /tmp with mode 777 is that another user could remove a file that you’ve created and substitute the content of their choice.

If your /tmp is a tmpfs filesystem, a reboot will restore everything. Otherwise, run chmod 1777 /tmp.

Thinking to myself: “Surely you can’t be serious that a reboot would fix this?”, or that one would even be required to fix it? But yes, rebooting the server DID fix it. And once again, permission problems on *NIX systems were the cause of another hour or two of serious frustration.
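If you want to verify the fix without touching the live /tmp, you can reproduce the 1777 mode on a scratch directory first:

```shell
# reproduce the expected /tmp permissions on a scratch directory
d=$(mktemp -d)
chmod 1777 "$d"
# the mode column shows drwxrwxrwt: the trailing 't' is the sticky bit
ls -ld "$d"
stat -c '%a' "$d"          # prints 1777 (GNU stat; macOS flags differ)
```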


Redirect http to https for a domain in Plesk 12.5

Update Apr, 2018: since Plesk v17.5 and higher, adding a custom http directive as mentioned in the article below isn’t required anymore. Now you can simply go to your domain in Plesk > Hosting Settings > and select the option Permanent SEO-safe 301 redirect from HTTP to HTTPS. Thank you, Plesk!

It’s a good evolution that it keeps getting easier to add an SSL certificate to a domain, and I would certainly encourage you to do so if you have the chance. For example, when you’re using Parallels’/Odin’s Plesk as a hosting environment, it is really easy to import an SSL certificate for a specific domain.

But the thing that is not so straightforward is forcing a domain to only run https.

This is because, remarkably, there is no native setting or magic checkbox you can select to make this happen. In fact, there is currently no other way than to add a custom directive to the .htaccess or vhost configuration of the domain. So let’s clear things up and simply make it easy, using the following instructions (applicable to the Linux variant of Plesk 12.5 only):

First, log in to your Plesk instance and navigate to the appropriate domain settings. There, select the Apache & nginx settings, which is a new option in Plesk 12.5.


On the settings page you’ll find the options to add Additional Directives for both http and https. The only thing we’re interested in right now is redirecting all traffic from http to https. For this, we’ll simply add a rewrite rule to the http directive:

RewriteEngine on
RewriteCond %{HTTPS} !=on
RewriteRule ^(.*)$ https://%{HTTP_HOST}/$1 [R,QSA]

Which will look like:


Hit the Apply button, and all your http traffic will automatically be redirected to https.


Best Practices of Version Control

Always use version control. Always… There really is no excuse not to use some form of version control. Not even when you’re a one-man army.

I prefer git myself, but all versioning systems have their own advantages, depending on your specific needs and the competences of your team members. The list below is a uniform cheatsheet of best practices for version control, originally set up by the people from Tower.

1. Commit Related Changes

A commit should be a wrapper for related changes. For example, fixing two different bugs should produce two separate commits. Small commits make it easier for other developers to understand the changes and roll them back if something went wrong. With tools like the staging area and the ability to stage only parts of a file, Git makes it easy to create very granular commits.  
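A minimal sketch of the idea, in a scratch repo with invented file names – two unrelated fixes become two commits, and git add -p handles the case where they are tangled inside one file:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
echo 'null check' > parser.c
git add parser.c
git -c user.name=demo -c user.email=demo@example.com commit -qm "Fix crash on empty input"
echo 'flush cache' > cache.c
git add cache.c
git -c user.name=demo -c user.email=demo@example.com commit -qm "Fix stale cache entries"
git log --format=%s            # prints the two subjects, newest first
# if both fixes ended up in one file, stage hunk by hunk with: git add -p
```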

2. Commit Often

Committing often keeps your commits small and, again, helps you commit only related changes. Moreover, it allows you to share your code more frequently with others. Sharing is a good thing. That way it’s easier for everyone to integrate changes regularly and avoid having merge conflicts. Having few large commits and sharing them rarely, in contrast, makes it hard to solve conflicts.  

3. Don’t Commit Half-Done Work

You don’t leave the toilet until the job is done. Neither should you commit code until it’s completed. This doesn’t mean you have to complete a whole, large feature before committing. Quite the contrary: split the feature’s implementation into logical chunks and remember to commit early and often. But don’t commit just to have something in the repository before leaving the office at the end of the day. If you’re tempted to commit just because you need a clean working copy (to check out a branch, pull in changes, etc.) consider using Git’s “Stash” feature instead.  
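A quick sketch of that stash workflow in a scratch repo (file name invented):

```shell
# stash half-done work instead of committing it
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm "init"
echo 'half-done form' > login.html
git add login.html
git stash push -q -m "wip: login form"
git status --porcelain         # prints nothing: the working copy is clean again
git stash pop -q               # ...restore the work in progress later
```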

4. Test Before You Commit

Resist the temptation to commit something that you “think” is completed. Test it thoroughly to make sure it really is completed and has no side effects (as far as one can tell). I always ask myself: “Will it blend?” While committing half-baked things in your local repository only requires you to forgive yourself, having your code tested is even more important when it comes to pushing / sharing your code with others. Really, test it – then test it again!

5. Write Good Commit Messages

Begin your message with a short summary of your changes (up to 50 characters as a guideline). Separate it from the following body by including a blank line. The body of your message should provide detailed answers to the following questions: What was the motivation for the change? How does it differ from the previous implementation? Use the imperative, present tense (“change”, not “changed” or “changes”) to be consistent with generated messages from commands like git merge.

6. Version Control is not a Backup System

Having your files backed up on a remote server is a nice side effect of having a version control system. But you should not use your VCS like it was a backup system. When doing version control, you should pay attention to committing semantically (see “related changes”) – you shouldn’t just cram in files.  

7. Use Branches

Branching is one of Git’s most powerful features – and this is not by accident: quick and easy branching was a central requirement from day one. Branches are the perfect tool to help you avoid mixing up different lines of development. You should use branches extensively in your development workflows: for new features, bug fixes, ideas…  
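A minimal topic-branch round trip, sketched in a scratch repo with an invented feature name:

```shell
tmp=$(mktemp -d) && cd "$tmp"
git init -q .
git -c user.name=demo -c user.email=demo@example.com commit --allow-empty -qm "init"
main=$(git rev-parse --abbrev-ref HEAD)        # 'master' or 'main', depending on config
git checkout -qb feature/report-export         # branch off for the new feature
echo 'csv export' > report.c
git add report.c
git -c user.name=demo -c user.email=demo@example.com commit -qm "Add report export"
git checkout -q "$main"
git merge -q feature/report-export             # merge back when the feature is done
```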

8. Agree on a Workflow

Git lets you pick from a lot of different workflows: long-running branches, topic branches, merge or rebase, git-flow… Which one you choose depends on a couple of factors: your project, your overall development and deployment workflows and (maybe most importantly) on your and your teammates’ personal preferences. However you choose to work, just make sure to agree on a common workflow that everyone follows.

Credits for this article – and much more elaborate tips and tricks – go to Git Tower.