Yearly Archives: 2016


Ignore changes to an existing file in a git repo

It frequently happens to me that I want to ignore changes to an existing file in a git repo. Or put otherwise, don’t track changes for a specific file.

Obviously you can use a strategy where you commit a file with a suffix like config.rb.example or web.config.dist, and .gitignore the actual ones. But that approach is ideally suited for config files that only require a one-time setup. Personally, I find it quite convenient to be able to toggle tracking of changes to a specific file. For this purpose you might want to roll up your sleeves for the following git commands.

Ignoring all changes to a specific file:

git update-index --assume-unchanged <path_to_file>

And this is easily reverted: use the --no-assume-unchanged flag to enable tracking of changes again:

git update-index --no-assume-unchanged <path_to_file>

Perfectly sane. But this flag lives on in your repo's index, and it is quite likely that someday you'll forget which files exactly were ignored in your repo. When this happens, you can use the following command to list all files marked as assume-unchanged:

git ls-files -v | grep '^[[:lower:]]'

A final important notice: when you have made changes to a file that is marked as assume-unchanged and you decide to switch branches, you might run into the following error:

error: Your local changes to the following files would be overwritten by checkout: … Please, commit your changes or stash them before you can switch branches.

The error message git spits out is quite self-explanatory: you have to decide to either commit, stash, or discard the changes before switching branches.
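Putting it all together, the toggle is easy to try out in a throwaway repo (the file name here is purely illustrative):

```shell
# set up a scratch repo with one committed config file
cd "$(mktemp -d)"
git init -q
git config user.email demo@example.com
git config user.name demo
echo 'setting: dev' > config.yml
git add config.yml && git commit -qm 'add config'

# stop tracking local changes to config.yml
git update-index --assume-unchanged config.yml

# list all files currently marked assume-unchanged (lowercase status letter)
git ls-files -v | grep '^[[:lower:]]'    # prints: h config.yml

# re-enable tracking again
git update-index --no-assume-unchanged config.yml
```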


Enabling gzip compression in a dotnet core webapi

A fine new addition to ASP.NET Core 1.1, released mid-November 2016, is the option to quickly configure and enable Gzip compression in your dotnet core application.

The recipe for enabling Gzip compression globally is quite easy, as the following two steps illustrate:

  1. Add the package Microsoft.AspNetCore.ResponseCompression
  2. Configure the middleware in your Startup.cs, as shown below:

    public void ConfigureServices(IServiceCollection services)
    {
        // Configure the Gzip compression level
        services.Configure<GzipCompressionProviderOptions>(options => options.Level = CompressionLevel.Fastest);

        // Add response compression services, using the Gzip provider
        services.AddResponseCompression(options => options.Providers.Add<GzipCompressionProvider>());
    }

    public void Configure(IApplicationBuilder app, IHostingEnvironment env, ILoggerFactory loggerFactory)
    {
        // Enable the response compression middleware
        app.UseResponseCompression();
    }

There is also a video by the Visual Studio team that illustrates this. However, while it sounds like a solid deal at first sight, you should be aware that it comes with a lot of dependencies.

For example, by installing the Microsoft.AspNetCore.ResponseCompression package through Visual Studio, you are prompted with a long list of dependencies:

In itself, having these dependencies does not really matter, especially in a dotnet core application, where dependencies are nested and more or less isolated from other references. But it is definitely something to be aware of.


Error: The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

Certificates. It’s always a pain.

So when you’re attempting to access a remote webservice, regardless of whether it’s SOAP or REST, and your browser already gives you the heads-up:


… you know the pain is coming to a theater near you very soon. Invalid SSL certificates will hurt you – the developer.

Take, for example, a simple C# WebRequest into consideration. By default, the ServicePointManager plays it safe and hooks into all your requests. If the SSL certificate is invalid, your WebRequest will fail with the following exception:

The underlying connection was closed: Could not establish trust relationship for the SSL/TLS secure channel.

Fixing it the proper way is easy: make sure you’re talking to a valid SSL certificate. But when you’re reading this, that is probably not an option. In that case there is a workaround, albeit a bit of a risky one: simply tell the (static) ServicePointManager to skip all checks on remote certificates for your application, like so:

// tell the ServicePointManager to skip all SSL checks
// ... at your own risk
ServicePointManager.ServerCertificateValidationCallback = delegate { return true; };

And do note the “at your own risk” part. You probably realise what this does and what kind of security implications it has: by overriding the default checks with the snippet above, you’re making sure that no remote certificate is ever validated, for any request your application makes.

It also raises the question: why would anyone allow you to do this? Are there any valid reasons to skip these checks? Sure. Give it a bit of imagination and think, for example, of a unit or integration test where you deliberately want to verify what happens with an internal self-signed SSL certificate. In the end, the option is available. And it smells. But from a pragmatic point of view, it keeps the show going.
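If you do resort to the override, a slightly less blunt variant is to only bypass validation for the one host you actually target, and to keep the normal checks for everything else. A sketch, where the host name is purely illustrative:

```csharp
using System;
using System.Net;
using System.Net.Security;

class Program
{
    static void Main()
    {
        // Bypass validation only for one known (hypothetical) internal host;
        // all other requests still go through the default certificate checks.
        ServicePointManager.ServerCertificateValidationCallback =
            (sender, certificate, chain, sslPolicyErrors) =>
            {
                if (sslPolicyErrors == SslPolicyErrors.None)
                    return true; // the certificate is actually valid

                var request = sender as HttpWebRequest;
                return request != null
                    && request.RequestUri.Host == "internal.example.local";
            };
    }
}
```

Still risky, but at least the blast radius is limited to the host you explicitly whitelist.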


Aurelia error: TypeError – customConfig.configure is not a function

When manually adding a custom bootstrap class – the main.js entrypoint if you will – I suddenly stumbled upon the following error message in the console:

TypeError: customConfig.configure is not a function(…)

I found the resolution in a (closed) issue on GitHub, where it is stated that you should not specify an entry point value on the aurelia-app attribute if you use a configuration to set the main entry point for your Aurelia application.

The aforementioned GitHub issue points this out specifically for when you use webpack, but it occurs with the default starter pack (which uses config.js) as well.

It’s quite easy to fix. Simply change the following:

<body aurelia-app="main">

… into this simpler signature where you remove the explicit entrypoint “main”:

<body aurelia-app>

And you’re good to go. This of course assumes your web bundler has a specific configuration, like for example the config.js from the starter kit, which states what exactly your app’s entry point is:
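For reference, the custom entry point itself is simply a module that exports a configure function; a minimal sketch (the root view-model name and enabled plugins are illustrative, adjust them to your project):

```javascript
// main.js -- a minimal custom Aurelia entry point (sketch)
export function configure(aurelia) {
  aurelia.use
    .standardConfiguration()
    .developmentLogging();

  // 'app' is the root view-model of this hypothetical project
  aurelia.start().then(() => aurelia.setRoot('app'));
}
```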



Error: zsh command not found

I encountered this error message on one specific macOS system, where not all NPM packages that were installed globally (and successfully, I might add) were available from the command line. In my case, a simple npm install -g jspm, to install JSPM’s CLI, would result in the following error message:

➜ ~ jspm -v
zsh: command not found: jspm

My first thought was to “blame” JSPM, especially considering other NPM packages were available just fine from the command line. After some digging around, it seemed to be related to something in my PATH variable.

By default on macOS, when you’ve installed NPM using Homebrew, the prefix path should be /usr/local. To verify this, simply use:

$ npm config get prefix

If the output doesn’t match the earlier mentioned /usr/local path – like in my case, where it was set to /Users/jhanssens/.npm-global/lib – simply change it accordingly:

$ npm config set prefix /usr/local
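Why does the prefix matter? Global npm binaries end up in the prefix’s bin directory, and zsh can only find them when that directory is part of your PATH. The underlying check is just string matching, sketched below (the on_path helper is mine, not an npm command):

```shell
# does $2/bin appear in the given PATH string ($1)?
on_path() {
  case ":$1:" in
    *":$2/bin:"*) echo "on PATH" ;;
    *)            echo "NOT on PATH" ;;
  esac
}

# e.g. check the Homebrew default prefix against the current PATH
on_path "$PATH" /usr/local
```

If this reports NOT on PATH for your prefix, adding the bin directory to PATH in ~/.zshrc is the other half of the fix.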

Hiding .js and .map files in Visual Studio Code, when in a TypeScript project

When you are working with TypeScript in Visual Studio Code, you often don’t want to see generated JavaScript files and source maps in the explorer or in search results. I mean, this can get messy:


Look at all those bloated .js and .map files which clutter my otherwise perfectly clean file explorer. Yuck!

Luckily, there is a way to hide derived JavaScript files in VS Code out of the box: using a filter expression with the files.exclude setting, we can hide these derived files.

Simply start by navigating to: Code > Preferences > Workspace Settings

Next, in the right pane you’ll see the settings.json override file, where you can add the following:

// Place your settings in this file to overwrite default and user settings.
{
    "files.exclude": {
        // include the defaults from VS Code
        "**/.git": true,
        "**/.DS_Store": true,

        // exclude .js and .map files, when in a TypeScript project
        "**/*.js": { "when": "$(basename).ts" },
        "**/*.map": true
    }
}

This will match any JavaScript file (**/*.js), but only hide it when a sibling TypeScript file with the same name is present. The exclude of **/*.map simply hides all map files. As a result, the file explorer will no longer show derived resources for JavaScript files that are compiled to the same location.


“Missing a temporary folder” error in the WordPress Media Gallery

After upgrading to WordPress 4.4.x, I was suddenly getting the following error when trying to upload an image through the Media Gallery:

Missing a temporary folder.

Pretty self-explanatory, as I would imagine the update has something to do with a missing file path or permissions. It’s always a permissions problem, isn’t it? So, the first starting point for anything related to this is the Codex, where I found the get_temp_dir() function, stating:

Determine a writable directory for temporary files. The function’s preference is the return value of sys_get_temp_dir(), followed by your PHP temporary upload directory, followed by WP_CONTENT_DIR, before finally defaulting to /tmp/. In the event that this function does not find a writable location, it may be overridden by the WP_TEMP_DIR constant in your wp-config.php file.

Decent enough. It checks the system’s temp dir first, then the PHP upload directory, and if neither is available you may override it using the WP_TEMP_DIR constant. Overriding sounds good, so let’s start with the latter to see if that provides a quick fix, by adding the mentioned constant to the wp-config.php file:

define( 'WP_TEMP_DIR', dirname(__FILE__) . '/wp-content/temp');
/* That's all, stop editing! Happy blogging. */

Now, normally, if the system’s temp directory wasn’t writable, explicitly defining a temp directory for WordPress like this should have fixed it. It didn’t, though. Because of the continuing error, I figured the best course of action was to check what the file paths are, whether they even exist, and whether they’re writable. Simply adding the following to a content page would surely give me the appropriate info:

        sys_get_temp_dir: <?php echo sys_get_temp_dir() . 
            ', exists: ' . file_exists(sys_get_temp_dir()) . 
            ', writable: ' . is_writable(sys_get_temp_dir()); ?>
        get_temp_dir: <?php echo get_temp_dir() . 
            ', exists: ' . file_exists(get_temp_dir()) . 
            ', writable: ' . is_writable(get_temp_dir()); ?>

Which spat out:

  • sys_get_temp_dir: /tmp, exists: 1, writable: 1
  • get_temp_dir: /var/…longpath…/wp-content/temp/, exists: 1, writable: 1

Strange. Both the system temp path as well as the WordPress override exist AND are writable. What gives? To cut a long story short – after lots of chmods and vague permission checks – the answer was provided in this post, by Gilles:

The normal settings for /tmp are 1777, which ls shows as drwxrwxrwt. That is: wide open, except that only the owner of a file can remove it (that’s what this extra t bit means for a directory).

The problem with a /tmp with mode 777 is that another user could remove a file that you’ve created and substitute the content of their choice.

If your /tmp is a tmpfs filesystem, a reboot will restore everything. Otherwise, run chmod 1777 /tmp.

Thinking to myself: “Surely you can’t be serious that a reboot would fix this?” Or even be required to fix it? But yes, rebooting the server DID fix it. And once again, permissions on *NIX systems were the cause of another hour or two of serious frustration.
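For the record, the mode bits from Gilles’ answer are easy to inspect and restore. A small sketch on a scratch directory, so no root is needed (on the real server the target would be /tmp itself):

```shell
# demonstrate the /tmp permission fix on a scratch directory
demo=$(mktemp -d)

chmod 777 "$demo"            # the "broken" state: wide open, no sticky bit
stat -c '%a %n' "$demo"      # mode is now 777

chmod 1777 "$demo"           # restore the sticky bit, as for /tmp
stat -c '%a %n' "$demo"      # mode is now 1777 (ls shows drwxrwxrwt)
```

Note that stat -c is the GNU coreutils flavor; on BSD/macOS the flags differ.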