System.UnauthorizedAccessException – Access to the path ‘C:\Windows\Microsoft.NET\Framework\v4.0.30319\Temporary ASP.NET Files\xxx.tmp’ is denied

TL;DR:

Check “Enable Just My Code” in Visual Studio settings (Tools > Options > Debugging).

Longer explanation:

It has happened to me a few times (either after installing plugins or seemingly out of nowhere) that this setting got changed. There was no obvious reason for suddenly getting all those unknown exceptions on top of code that had never thrown any.

The problem then is that any piece of code crashing (mostly internal framework errors you don’t care about) surfaces and pops up every other second while you are debugging your own code.

As noted above, tick the setting back on and go back to your code in confidence 🙂

Azure DevOps – Publish your Angular 5/6/7 front-end to an Azure App Service

In this article, we will go through all the steps needed to publish an Angular app onto an Azure App Service using Azure DevOps. Let’s start, shall we?

1. Getting all npm modules prior to compiling

Starting with a new build definition, add a first npm task to install all the modules required by your app.

  • Don’t forget to set the proper working folder for npm to be able to find your package.json file,
  • Set the Command dropdown list value to “install”.

2. Compile your Angular code using the Angular CLI

The npm scripts in my package.json look like this:

"scripts": {
  "build:prod:aot": "rimraf dist dll && ng build --prod --optimization --build-optimizer --aot --progress --bail",
}

In this case, I will create a new Command line task that will run this command after installing modules:

  • The “Script” field value will be set to “npm run build:prod:aot” (as you would run it manually locally).
  • Don’t forget to set the Working directory again so the process runs from the right folder (listed in the Advanced section of the task).

3. Push built artifacts

Once the build is done, you can pick up your files from the “dist” folder and publish them as an artifact.

  • We will pick our files up from “$(Build.SourcesDirectory)\np-material\material\dist” and push them into an artifact.
  • The artifact requires a name, which will be used later.
  • Leave the Publish location set to the default (Azure Pipelines/TFS).

4. Publish your app

Now that your app has been built, it is time to deploy it.

Having your artifact listed in the release is the first step; then let’s look at the stage that publishes it:

  • Select your subscription, app type and App service name,
  • Specify the path to get the data to publish to your app:
    • Notice that the path contains the name of your previously named artifact,
    • Also note that this is a folder, meaning that everything inside it will be pushed to your App Service web root.

Now your app is published, happy coding!

Technical architecture: an example of matchmaking of old and new stuff

Greenfield projects are not legion. When they do occur, that is where you can leverage the best-of-breed tech.

But what if we can mix old and new to obtain something great?

I want to share here the context of a project I was involved in back in 2014, and the architecture that made new components work alongside the existing technologies.

1. Context:

First, let’s start by looking at the existing project landscape (started in 2005).

Tech:

  • SQL Server 2008
  • Merge replication with mobile devices (SQL Mobile 2005 CE)
  • Most application logic stored in triggers

Why the need for change:

  • Merge replication relies on SQL triggers to detect entity changes,
  • Merge replication implies that data sync is done directly against the data store, with no way to run business logic in between other than putting it into triggers,
  • Merge replication has a heavy impact on SQL Server performance, considering the 2 previous points (client entity merges plus very complex business triggers),
  • End of life for most PDAs used to sync & end of support for Windows Mobile 2005,
  • Devices syncing with this mechanism would be decommissioned market by market over a 3-year period.

What the new solution needs to provide:

  • has to provide lightning-fast sync to new users on new technologies,
  • has to sync data with the old SQL systems without bringing additional performance bottlenecks,
  • SQL triggers have to stay, to allow old devices using merge replication (thousands of them) to work in conjunction with the new stack and enable a market-by-market device replacement.

2. Solution:

The technical solution highlighted in the diagram above is the following:

Looking at data sync:

  • Use an independent data store, containing the entities to push to devices so they can do their work,
  • To avoid a performance impact on the main SQL instance, a read-only mirror was built,
  • Using a deported data store avoids a back-pressure effect on the SQL back-end when all devices want to sync (most people using merge replication were all syncing at the end of the day),
  • Dedicated services took care of syncing data from SQL to the data store; devices, on the other hand, only interacted with Web API methods over REST (see the sketch after this list),
  • Data to be pushed to devices was retrieved once every 10 minutes from the main SQL databases on which merge replication was in place (using views and stored procedures from the existing application),
  • Data queried and placed in the data store was denormalized as much as possible to simplify sync.
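To make that Web API point concrete, here is a minimal sketch of the kind of REST endpoint devices could call against the deported store. The controller, database, collection and field names are hypothetical and only illustrate the pattern; it assumes the ASP.NET Web API and MongoDB .NET driver packages.

// Illustrative sketch only – names and document shape are assumptions, not the original project’s.
using System.Collections.Generic;
using System.Web.Http;
using MongoDB.Bson;
using MongoDB.Driver;

public class DeviceSyncController : ApiController
{
    private static readonly IMongoCollection<BsonDocument> SyncDocuments =
        new MongoClient("mongodb://sync-store")            // replica set connection string in practice
            .GetDatabase("sync")
            .GetCollection<BsonDocument>("deviceEntities"); // denormalized documents refreshed every 10 minutes

    // GET api/devicesync?deviceId=... : returns the documents prepared for this device.
    public IEnumerable<BsonDocument> Get(string deviceId)
    {
        return SyncDocuments
            .Find(Builders<BsonDocument>.Filter.Eq("deviceId", deviceId))
            .ToList();
    }
}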

Looking at scaling:

  • The data store was based on MongoDB replica set functionality, to allow scaling as more markets moved to the new platform,
  • Scaling was also considered on the Azure App Services side, to allow more instances to come into play as more markets joined,

Looking at updating data:

  • The data store was split into data to be pushed and data to be updated,
  • Data sent by devices for update was queued into staging document sets (see the sketch after this list),
  • Data was integrated by jobs back into the main app SQL instance; if a performance issue caused updates to fail, they were kept and merged later,
  • Updates went through the existing triggers and views, keeping compatibility with the old devices; this also allowed the sync process to pick up those changes and push them back to all devices.
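On the update side, here is a sketch of the staging-queue idea, with the same caveat that the collection and field names are made up for illustration:

// Illustrative sketch only – updates from devices are appended to a staging collection,
// and background jobs later replay them into the main SQL instance through the existing views/triggers.
using System;
using System.Threading.Tasks;
using MongoDB.Bson;
using MongoDB.Driver;

public class UpdateStagingQueue
{
    private readonly IMongoCollection<BsonDocument> _stagedUpdates;

    public UpdateStagingQueue(IMongoDatabase database)
    {
        _stagedUpdates = database.GetCollection<BsonDocument>("stagedUpdates");
    }

    // Called by the Web API layer when a device posts a change; nothing touches SQL here.
    public Task EnqueueAsync(string deviceId, BsonDocument entityUpdate)
    {
        entityUpdate["deviceId"] = deviceId;
        entityUpdate["receivedUtc"] = DateTime.UtcNow;
        return _stagedUpdates.InsertOneAsync(entityUpdate);
    }
}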

Bottom line:

Keeping old application building blocks is not always easy, and most of the time you don’t really have a choice; still, you can make cool things with them.

In this case, the platform was able to:

  • Sustain sync for about 800 simultaneous devices per App Service, compared with around 50 PDAs on the existing stack.
  • Offer a first sync that went down to 30 seconds (and less than 200 ms for incremental ones), instead of 25-minute syncs with merge replication (which could go up to 45 minutes at peak usage).

Indexing for objects in your code – Not only a data storage thing

 

As you might already know, reader, if you hit this page, indexing is and will always be about two things:

  • Indexing comes from a simple need, whether it is dev-related or not,
  • It helps you get to the data you want FASTER.

“I want to find all people whose first name starts with S”: I can sort that out with that kind of folder sorting.

When it comes to data storage, things are the same:

  • File systems index the position of files on the actual hard drive or flash drive,
  • Databases allow you to create multiple indexes on certain columns, to make sure you can filter and sort faster.

Why make the point for code then? Well, it comes down to the exact same problem: performance.

Most of the time, people use LINQ to parse data and get subsets of it for processing, in a loop for example:

foreach (var countryCode in _countryCodes)
{
    var entitiesForCode = _entitiesLists.Where(e => e.Country == countryCode).ToList();
    var count = entitiesForCode.Count;
    // Code supposedly doing something with the entities.
}

It can be fine if not too many of these loops are run, but why is it so different when it runs on, let’s say, a million rows?

The same problem occurs with a database: if the SQL engine running the query doesn’t know anything about how to get the rows that match your WHERE clause, it has to scan all rows.

In the case of our loop above, LINQ does the same: how would LINQ know which items match your lambda function until it has tried it on all the items in your list?

To solve that issue, we are going to use a less common LINQ construct: Lookup.

The goal is simple: we are going to use it to build an index over our data, grouping it by a given key. Building it only once on our dataset fixes our problem, in the sense that getting the data subset back for each loop iteration becomes near-instant with our Lookup.
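Here is a minimal sketch of what that looks like, reusing the same illustrative names as in the loop above:

// Build the index once: a single pass over the whole list.
var entitiesByCountry = _entitiesLists.ToLookup(e => e.Country);

foreach (var countryCode in _countryCodes)
{
    // Near-instant retrieval instead of scanning the whole list on every iteration.
    var entitiesForCode = entitiesByCountry[countryCode];
    var count = entitiesForCode.Count();
    // Code supposedly doing something with the entities.
}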

Here is the performance difference you can get, taken from the output of our test console app:

Data: building
Data: done
Test1: start
Test1: processed in 535 milliseconds
Test2: start
Test2: lookup done in 140 milliseconds
Test2: processed in 141 milliseconds

To summarize our little article (TL;DR):

  • Building the lookup first takes some time, but doing it once offers around a 4x performance gain,
  • Make use of indexing capabilities when your dataset grows above a certain number of items (that is to say, most of the time!),
  • This is a really simple sample that does not reflect everything else that could happen around your code; I have seen processing go from 45 minutes down to 5 (a 9x improvement),
  • The repro code can be found on GitHub here.

 

Multi tenancy with Azure – Guide

Dealing with client data properly is quite important, especially with GDPR coming 🙂

Still, building apps for multiple clients is and has always been a complicated task for multiple valid reasons:

  • Monetization: consumer vs company data segmentation
  • Resources Cost: data split strategy vs costs
  • Non-technical: legal or data-protection constraints per country or union (e.g. the European Union)

Let’s go through the options we have that actually work, with pros and cons for each of those concerns.

Continue reading Multi tenancy with Azure – Guide

Entity Framework – Code First Migration – Solving merge errors

Using Entity Framework with git and having multiple branches updating a model can be quite challenging.

Here is a concrete example:

  • develop branch is on code first migration v1
  • feature X updates the model to add fields to an entity, adding migration v2
  • feature Y also updates the model to add fields to an entity, adding migration v3
  • feature X is merged into develop
  • feature Y is merged, coming on top of the previous one

Remarks:

First, both feature migrations are independent and both built on top of v1:

EF will be able to process them, but what will happen is this:

The changes shown as incoming in the merge tool will bring back migration v2’s changes when running “Add-Migration” again, even though they are already present.

It basically means EF does not see v2, because v3 was generated while v2 was not present in the branch at the time the migration was added.

This has to do with the data found in each migration’s RESX file, which contains a Base64-encoded binary snapshot of the model, used to keep track of the link between migrations.
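For reference, here is roughly what the generated designer file of an EF6 migration looks like (the class name and migration Id below are illustrative); the Target property reads that Base64/compressed model snapshot out of the RESX file, and that snapshot is what Add-Migration compares your current model against:

// Illustrative EF6-style designer file; the class name and migration Id are hypothetical.
using System.Data.Entity.Migrations.Infrastructure;
using System.Resources;

public sealed partial class AddFieldsToEntity : IMigrationMetadata
{
    private readonly ResourceManager Resources =
        new ResourceManager(typeof(AddFieldsToEntity));

    string IMigrationMetadata.Id
    {
        get { return "201802190000000_AddFieldsToEntity"; }
    }

    string IMigrationMetadata.Source
    {
        get { return null; }
    }

    // The model snapshot this migration was generated against, stored in the RESX file.
    string IMigrationMetadata.Target
    {
        get { return Resources.GetString("Target"); }
    }
}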

Bottom line: to avoid this from happening, you can:

  1. Rebuild migration v3 on top of v2 in the develop branch, by deleting v3, getting v2 from develop, and adding the migration again
  2. Build a dedicated model-update feature branch that everyone merges into their own branch once it has been updated with their model changes
  3. Merge feature X into feature Y for the DB change only, then add your fields in feature Y so that the new fields rely on migration v2

Happy merging!

 

Connecting to CosmosDB with Microsoft Azure Storage Explorer now

You’ve probably noticed the Cosmos DB announcement a couple of weeks ago; this is a great step towards getting secondary indexes for the Table Storage-like data you are using today in Azure.

I rely quite often on Microsoft Azure Storage Explorer to access my table data, but the Cosmos DB part is not done yet, so how do you do that?

HOW TO:

Because Cosmos DB now has a Table API that behaves exactly the same as a Storage Table, just:

  • Open the “Local and Attached” root navigation node in Storage Explorer,
  • Right-click “Storage Accounts” and select “Connect to Azure Storage”,
  • Select “Use a connection string or a shared access signature URI” and follow the rest of the process to add your Cosmos DB table and use it as a Storage table!

This is a workaround to play with your Cosmos DB data in a simple way, without having to wait for the tool to catch up.
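The same compatibility applies from code. Here is a minimal sketch, assuming the classic WindowsAzure.Storage SDK and a placeholder table name; only the connection string changes between a Storage account and a Cosmos DB Table API account:

// Sketch: the same Table Storage SDK calls work against a Cosmos DB Table API connection string.
// The connection string values and table name below are placeholders.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

class Program
{
    static void Main()
    {
        var account = CloudStorageAccount.Parse(
            "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>;TableEndpoint=https://<account>.table.cosmosdb.azure.com:443/;");

        CloudTable table = account.CreateCloudTableClient().GetTableReference("mytable");

        // Query entities exactly as you would against a classic Storage Table.
        foreach (DynamicTableEntity entity in table.ExecuteQuery(new TableQuery<DynamicTableEntity>().Take(10)))
        {
            System.Console.WriteLine($"{entity.PartitionKey} / {entity.RowKey}");
        }
    }
}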

Still, Cosmos DB does not behave the same way as traditional Table Storage, especially on import/export of large volumes of data: where Table Storage throttles query performance, Cosmos DB just cuts the connection straight away.

Happy indexing!