1. In my previous posts I described how to use Docker to create an image that can be used for development. In this post I will briefly talk about Docker Compose. If the developers on your team are not too familiar with the docker CLI and your application gets more complex, it is useful to encapsulate all the parameters used to build and run the container(s) in a docker-compose file. Let's take a look at the file used to launch the development environment:

    version: "3"
    services:
      web:
        build:
          context: .
          dockerfile: Dockerfile.dev
        ports:
          - "5000:80"
        volumes:
          - .:/code/app

    Notice that this file declares which Dockerfile to use when building, maps the ports, AND sets up the file volume. Now, instead of remembering all those arguments to the 'docker run' command, you can use the 'docker-compose' CLI like this:

    # builds the image
    docker-compose build
     
    # runs all the containers
    docker-compose up

    As we will see in a future blog post, docker-compose can launch all the containers needed by your application with one short, easy-to-remember command.
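For reference, here is a sketch of the raw docker CLI calls that the compose file stands in for. The commands are built as strings and echoed rather than executed, and the 'daycare-dev' tag is an assumption (compose derives its own image name):

```shell
# docker-compose build is roughly a docker build with the compose file's
# context and dockerfile entries (tag name is assumed for illustration):
build_cmd="docker build -f Dockerfile.dev -t daycare-dev ."

# docker-compose up is roughly a docker run with the ports and volumes entries:
run_cmd="docker run -p 5000:80 -v $(pwd):/code/app daycare-dev"

echo "$build_cmd"
echo "$run_cmd"
```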


  2. In my previous post I described how to use Docker to create an image that can be used for development. However, if you run this container you will find that modifying TypeScript files does not auto-reload the browser with your changes. The reason is that the WebPack development server has problems with file notifications when running in a Docker container with volumes.

    To work around this, a few additional steps are needed, starting with the Dockerfile.dev file:

    FROM microsoft/aspnetcore-build:2.0.3
     
    # Required inside Docker, otherwise file-change events may not trigger
    ENV DOTNET_USE_POLLING_FILE_WATCHER 1
    ENV ASPNETCORE_ENVIRONMENT Development
    ENV DOCKER_ENVIRONMENT Development
     
    ...

    Notice the DOCKER_ENVIRONMENT variable. This is a custom variable that will be used in Startup.cs to configure WebPack's development server to use polling to detect changes instead of the native OS file notifications. Here is the relevant section of Startup.cs:

    public class Startup
    {
        public void Configure(IApplicationBuilder app, IHostingEnvironment env)
        {
            if (env.IsDevelopment())
            {
                var webpackOptions = new WebpackDevMiddlewareOptions();
                
                var dockerEnvironment = Environment.GetEnvironmentVariable("DOCKER_ENVIRONMENT");
                if (!String.IsNullOrEmpty(dockerEnvironment))
                {
                    webpackOptions.EnvironmentVariables = new Dictionary<string, string>() { { "DOCKER_ENVIRONMENT", dockerEnvironment } };
                }
     
                app.UseWebpackDevMiddleware(webpackOptions);
            }
        }
    }

    All that is happening here is that the environment variable is read and added to the webpackOptions that JavaScriptServices will use when running the WebPack development server (and all of this only in Development mode, of course).
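The presence check above can be sketched in isolation. This mirrors the String.IsNullOrEmpty test in Startup.cs: any non-empty value selects polling, and the value itself does not matter beyond that:

```shell
# Mirrors the check in Startup.cs: a non-empty DOCKER_ENVIRONMENT selects polling.
DOCKER_ENVIRONMENT=Development
if [ -n "$DOCKER_ENVIRONMENT" ]; then watch_mode=polling; else watch_mode=native; fi
echo "$watch_mode"
```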

    The last piece of the puzzle is in webpack.config.js, where the environment variable is read and, if the application is running inside a Docker container, WebPack is configured to use polling to trigger a recompilation of the client-side source.

    module.exports = (env) => {
        const isDevBuild = !(env && env.prod);
        const isDockerContainer = isDevBuild && env && env.DOCKER_ENVIRONMENT;
     
        return [{
            resolve: {
                extensions: ['.ts', '.js'],
                modules: ['ClientApp', 'node_modules'],
            },
            output: {
                path: path.resolve(bundleOutputDir),
                publicPath: '/dist/',
                filename: '[name].js'
            },
            watch: !!isDockerContainer,
            watchOptions: {
                ignored: /node_modules/,
                aggregateTimeout: 300,
                poll: 1000
            },
        }];
    }

    Notice in line #15 that the 'watch' property is set depending on the DOCKER_ENVIRONMENT variable. Additionally, the 'watchOptions' property configures the file watcher to poll every second.

    With these changes in place, when you run the application inside the container and a TypeScript file is touched, webpack will recompile the sources after about a second and you can reload the browser to see the changes.
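As a minimal sketch of what polling means here: instead of waiting for a file-change event, the watcher re-stats the file on a timer and compares modification times (webpack's poll: 1000 setting does this once per second internally). The assumptions: GNU stat (the -c %Y form; macOS would use -f %m) and second-level mtime resolution:

```shell
# Polling in a nutshell: re-stat the file and compare mtimes.
f=$(mktemp)
before=$(stat -c %Y "$f")
sleep 1
touch "$f"               # simulate an edit coming from the host machine
after=$(stat -c %Y "$f")
if [ "$after" -gt "$before" ]; then change_detected=yes; else change_detected=no; fi
echo "$change_detected"
rm -f "$f"
```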


  3. In my previous post I described how to automate a deployment to Heroku using CircleCI and Docker. I will switch gears now and write about how to set up a cross-platform development environment for the project. You see, it turns out a friend of mine who collaborated with me on this project could not even run the web application on his MacBook. For all the fanfare about .NET Core being cross-platform and whatnot, this project just would not run on his Mac (weird binding problems that he spent hours debugging).

    Luckily, we figured out how to use Docker to normalize the development environment. Getting this to work required enough steps that it will take multiple blog posts to cover them. Here are the high-level goals:

    1. Have a Docker container with .NET Core installed run the application, but have the source point to the code on the host machine (we will use Docker volumes for that).
    2. Set up a file watcher so that every time the code is modified on the host machine, the process is restarted in the Docker container (for JavaScript as well as C# code).

    For now, let's start by looking at a new Dockerfile.dev file that we use for development and go over the many hurdles to get this to work.

    FROM microsoft/aspnetcore-build:2.0.3
     
    # Required inside Docker, otherwise file-change events may not trigger
    ENV DOTNET_USE_POLLING_FILE_WATCHER 1
    ENV ASPNETCORE_ENVIRONMENT Development
    ENV DOCKER_ENVIRONMENT Development
     
    # Set a working dir at least 2 deep. The output and intermediate output folders will be /code/obj and /code/bin
    WORKDIR /code/app
     
    # By copying these into the image when building it, we don't have to re-run restore every time we launch a new container
    COPY *.csproj .
    COPY NuGet.config .
    COPY Directory.Build.props .
    RUN dotnet restore
     
    # This will build and launch the server in a loop, restarting whenever a *.cs file changes
    ENTRYPOINT dotnet watch run --no-restore


    Setup the Image Binaries

    The first hurdle is that ASP.NET Core gets plenty confused when the bin and obj folders are shared between the docker container and the host machine. Instead of dealing with that, one solution is to keep them separate by using a Directory.Build.props file. This file configures dotnet to output the bin and obj folders next to the project directory instead of inside it. So if your local project lives in C:\dev\daycare, the binaries get produced in C:\dev\daycare_bin. Not ideal, but hey, the show must go on.

    Notice in line #14 that the Directory.Build.props file is copied into the image before the solution is built. BUT, the real trick is line #9, where the working directory within the image is set to '/code/app'. This needs to be 2 folders deep so that the binaries are produced in /code/daycare_bin. Now the binaries inside and outside the docker container are separate and we can move on.


    Setup the Dotnet File Watcher

    There is an official Microsoft.DotNet.Watcher.Tools package that we can use to set up the watcher for .NET code. Well, it turns out the version that supports Docker is only available in preview. Starting with .NET Core 2.1 it has been rolled into the dotnet CLI, but projects on earlier versions of .NET Core need an extra file to add the NuGet feed that has the 2.1.0-preview version of this tool. You can see the contents of the NuGet.config file here, and it needs to be copied into the image as you can see in line #13.

    Then, even when you get this version, it doesn't work out of the box from within a Docker container that uses volumes; you need to configure it to use file polling instead of relying on the OS file notifications. This is done by setting the DOTNET_USE_POLLING_FILE_WATCHER environment variable to 1, as you can see in line #4.

    Finally, you can see in line #18 that the docker entry point now uses 'dotnet watch run' to kick off the file watcher.


    Run it!

    After you build the image with this new Dockerfile (tagging it, for example, 'daycare-dev'), you can run it and map the port and the volume like this ('docker run' requires an absolute host path for the volume, hence $(pwd)):

    docker run -p 5000:80 -v "$(pwd)":/code/app daycare-dev

    Notice that the current directory is mapped to the folder /code/app inside the container. When the container runs, it will use the binaries from within the container but your sources for the file watcher and building. You can now navigate to localhost:5000 to load the application running in Docker, and if you modify any C# code on your machine you will notice that dotnet inside Docker automatically restarts.
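One detail worth calling out: the host side of 'docker run -v' must be an absolute path, which is what $(pwd) supplies. A quick sketch of the mount argument it produces (printed here, not handed to docker):

```shell
# Build the volume argument; docker -v needs an absolute host path on the left.
mount_arg="$(pwd):/code/app"
echo "$mount_arg"
```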


    But wait, there is more! It will have to wait until the next blog post :P. In the meantime you can see the website hosted in Heroku here or browse the sources at the project page.



  4. In my previous post I described how to manually deploy a docker image to the Heroku container registry. In this post I will take a look at how to automate this process using a continuous integration service. The goal is that with every commit to the master branch, a CI server should build a new docker image and push it to Heroku.

    In this case I will use CircleCI. After creating an account and linking it to the project's repo in Bitbucket, you need to add a configuration YAML file under a folder named '.circleci' in your repo. What follows are the contents of config.yml and a description of each section.

    version: 2
    jobs:
      build:
        branches:
          only:
            - master
        machine: true
        steps:
          # checkout source code
          - checkout

          # build image
          - run: |
              docker info
              docker build -t daycare-app .

          # deploy the image
          - run: |
              docker login --username=$HEROKU_USERNAME --password=$HEROKU_API_KEY registry.heroku.com
              docker tag daycare-app registry.heroku.com/$HEROKU_APP_NAME/web
              docker push registry.heroku.com/$HEROKU_APP_NAME/web

          # release the image
          - run: |
              heroku -v
              heroku update
              heroku container:release web -a $HEROKU_APP_NAME
    • Line #6 tells CircleCI to only build the image for commits on the master branch.
    • At line #13 the meat of the build process starts, first running docker build and tagging the image. This will pick up the Dockerfile from the root directory that was described in a previous post.
    • Line #18 runs a series of commands to upload the image to Heroku. First, it signs in to the Heroku registry using a username and API key taken from environment variables (I will go over how to define these shortly). Then the image is tagged and pushed to the Heroku registry, using another environment variable for the application name as defined in Heroku.
    • Line #24 releases the image so that it goes live on Heroku. This turned out to be more of a hack: the only way I could figure out to release the image was with the Heroku CLI (my attempts to use a simple curl request failed). Luckily, CircleCI build machines already have the Heroku CLI installed, BUT it is an older version. So first the Heroku CLI is updated on the build machine and then we can call the release command.
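The push target follows Heroku's registry.heroku.com/&lt;app&gt;/&lt;process-type&gt; naming convention. Here is how it is assembled from the environment variables (the app name below is a stand-in for whatever $HEROKU_APP_NAME holds in CircleCI):

```shell
# Assemble the push target from the env vars (sample value, not the real one).
HEROKU_APP_NAME="daycare-sample"
image="registry.heroku.com/$HEROKU_APP_NAME/web"
echo "$image"
```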

    Now, regarding those environment variables: head over to your account settings page in Heroku to generate an API key:

    [screenshot: heroku-api-key]

    Then, in CircleCI open the build project settings and add the environment variables:

    [screenshot: circleci-variables]

    That's all there is to it. Commit the config file to your repo, and watch as CircleCI starts the build and deploys the image to Heroku:

    [screenshot: circleci-build]

    You can see the website hosted in Heroku here or browse the sources at the project page.


  5. In the previous posts I described how to build a production image for the web application. The next couple of posts will go over how to automate the deployment of the application to Heroku. First, let's deploy from the development machine using the Heroku CLI.

    1. Start by creating a new Heroku application:

    [screenshot: daycare-heroku]

    2. Next, using the Heroku CLI, build and push the image to Heroku's container registry:

    heroku container:push web -a YOUR_APP_NAME


    3. Finally, you can release the image:

    heroku container:release web -a YOUR_APP_NAME


    Your application is now available on Heroku! For example, https://daycare-sample.herokuapp.com. One question you may have is: how does Heroku manage the mapping of the port to your container? The answer is the $PORT variable that was set in the Dockerfile (which defaults to 5000). Heroku automatically assigns a port number and makes it available to your application through the $PORT variable.
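The fallback behaviour can be sketched in shell: ${PORT:-5000} below plays the role of the Dockerfile's ENV PORT=5000 default, and the Heroku-assigned value is made up for illustration:

```shell
# With nothing injected from the host, the default applies:
unset PORT
default_urls="http://*:${PORT:-5000}"

# When Heroku starts the dyno it injects PORT (made-up value here):
PORT=23456
heroku_urls="http://*:$PORT"

echo "$default_urls"
echo "$heroku_urls"
```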

    Next up, how to automate the deployment using CircleCI.


  6. In my previous post I went over the Dockerfile that creates the production image for the application. Before publishing it, let's test it locally using Docker for Windows. First, to create the image, run:

    docker build ./ -t daycare-app

    Docker automatically uses the 'Dockerfile' in the current directory to build the image and then tags it with 'daycare-app'. You will see a lot of output with all the intermediate steps, but at the end you will have an image tagged 'daycare-app:latest' in your local image registry. Now, to run a container locally with this image:

    docker run --rm -it -p 5000:5000 daycare-app

    Dissecting each of the parameters:

    • --rm: Automatically remove the container and clean up its file system when it exits.
    • -it: Marks the process as interactive and allocates a terminal for the container, which allows you to see the output of the process on the terminal.
    • -p: Maps port 5000 of the host to port 5000 of the container. Why 5000? Because, if you remember, in the Dockerfile the process starts with "CMD ASPNETCORE_URLS=http://*:$PORT dotnet Daycare.dll", which allows the consumer to provide the port with an environment variable. Note also that the Dockerfile defines a default value for the PORT variable: "ENV PORT=5000".

    Once the container starts you can browse to http://localhost:5000 to load the application. Next up, deploy the image and host it on Heroku.


  7. Let’s now see how to publish the application to any cloud provider that supports Docker containers, like Heroku. To get started, I will go over the Dockerfile that will build the image that we can then host in Heroku:

    # build image
    FROM microsoft/aspnetcore-build:2.0.3 as build
    WORKDIR /app
     
    COPY *.csproj ./
    RUN dotnet restore
     
    COPY . ./
    RUN dotnet publish -c Release -o out
     
    # runtime image
    FROM microsoft/aspnetcore:2.0.3
    ENV PORT=5000
    WORKDIR /app
    COPY --from=build /app/out .
    CMD ASPNETCORE_URLS=http://*:$PORT dotnet Daycare.dll

    First, notice that the file uses multi-stage builds, a Docker feature that lets you create intermediate images that can be used as the source of the final image. In this case, the first stage uses a base image that has enough tools in it to build the application; the second stage uses the result of the first to load the application into a base image that has just the .NET Core runtime. If you are interested in a step-by-step description:

    • Line #2 gets the aspnetcore-build image, which is published by Microsoft and has not only the .NET Core SDK installed, but also Node.js.
    • Line #5 copies the .csproj file into the image and then runs dotnet restore to pull all the .NET dependencies into the image. Why copy only the .csproj files instead of all the source code? Because Docker caches the result of every step, so as long as the .csproj file hasn't changed, subsequent builds won't have to keep running dotnet restore over and over.
    • Line #8 copies the rest of the source code and runs dotnet publish. Remember from my previous post that this command will download all node dependencies AND run webpack to create the production assets. The whole application will be placed in an ‘out’ folder.
    • Line #12 begins the second stage, using the aspnetcore base image. This image has only the runtime bits and is smaller than the build image.
    • Line #15 copies the output from the first stage into this new image.
    • Line #16 starts the application in a special way. This has to do with how Heroku expects images to be built in order to host them:
      • First, you cannot use the ENTRYPOINT command, so you have to fall back to the CMD command.
      • Second, Heroku exposes a random port for your container and makes it available in a $PORT variable. We need to pass this port down to Kestrel; fortunately, ASP.NET Core reads an environment variable named 'ASPNETCORE_URLS' that can be used to configure Kestrel. So the CMD command first sets that environment variable using the $PORT set by Heroku and then launches the application using the dotnet command.

    Running the ‘docker build’ command will create the image that is ready for production. In the next post, I’ll go over how to test it and deploy it to the host. You can see the website hosted in Heroku here or browse the sources at the project page.


  8. Up to now, I have focused on the application that is auto-generated by the ASP.NET Core project template for Aurelia. I will take a detour and write about deployment in the next couple of posts. The idea is that as the application is developed, with each check-in we should have a public site that we can use to test.

    The easiest deployment mechanism for an ASP.NET Core application is Azure: just create a WebApp and push the code to its repo. Azure has the .NET SDK and Node.js installed, so its machines can build the project and publish the output. How does this work? Azure detects that this is a .NET Core app and runs a specialized deploy.cmd, which first runs 'dotnet restore' to download the .NET dependencies and then runs 'dotnet publish' on the .csproj file in the root of the repo. To understand what happens at this stage, let's take a look at the relevant section of the daycare.csproj file:

    <Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
      <!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
      <Exec Command="npm install" />
      <Exec Command="node node_modules/webpack/bin/webpack.js --config webpack.config.vendor.js --env.prod" />
      <Exec Command="node node_modules/webpack/bin/webpack.js --env.prod" />
     
      <!-- Include the newly-built files in the publish output -->
      <ItemGroup>
        <DistFiles Include="wwwroot\dist\**" />
        <ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
          <RelativePath>%(DistFiles.Identity)</RelativePath>
          <CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
        </ResolvedFileToPublish>
      </ItemGroup>
    </Target>

    Notice that this target is triggered after the 'ComputeFilesToPublish' target, which is one of the default targets added by the .NET SDK (how you are supposed to know the names of these things is a mystery to me). Anyway, the 'PublishRunWebpack' target first runs 'npm install' to download all node dependencies onto the build machine, then it runs WebPack to create both the vendor and the application bundles that I described in the previous post. An important detail is that it passes the '--env.prod' parameter to WebPack so that the production bundles are built.

    Then it grabs all the assets that WebPack created under 'wwwroot/dist/' and includes them as part of the publish output. This part of the MSBuild process is alien to me; I don't fully understand what that 'ResolvedFileToPublish' element does, but somehow it moves the client-side assets to a place where 'deploy.cmd' can pick them up and deploy them to the machine that will host the application.

    The last step of deploy.cmd is to run something called KuduSync, which has the smarts to copy only new and modified files from the output directory to the host machine. Below is a dump of the output of the deployment script:

    Command: "D:\home\site\deployments\tools\deploy.cmd"
    Handling ASP.NET Core Web Application deployment.
      Restore completed in 125.13 ms for D:\home\site\repository\daycare.csproj.
      Restore completed in 277.96 ms for D:\home\site\repository\daycare.csproj.
      Restore completed in 343.82 ms for D:\home\site\repository\daycare.csproj.
      Restore completed in 834.03 ms for D:\home\site\repository\daycare.csproj.
    Microsoft (R) Build Engine version 15.4.8.50001 for .NET Core
    Copyright (C) Microsoft Corporation. All rights reserved.
     
      daycare -> D:\home\site\repository\bin\Release\netcoreapp2.0\Daycare.dll
      npm WARN optional SKIPPING OPTIONAL DEPENDENCY: fsevents@^1.0.0 (node_modules\chokidar\node_modules\fsevents):
      npm WARN notsup SKIPPING OPTIONAL DEPENDENCY: Unsupported platform for fsevents@1.1.3: wanted {"os":"darwin","arch":"any"} (current: {"os":"win32","arch":"ia32"})
      npm WARN daycare@0.0.0 No repository field.
      npm WARN daycare@0.0.0 No license field.
      Hash: ded10ffca4fd10b012d0
      Version: webpack 2.7.0
      Child
          Hash: ded10ffca4fd10b012d0
          Time: 53637ms
                                         Asset     Size  Chunks                    Chunk Names
          674f50d287a8c48dc19ba404d20fe713.eot   166 kB          [emitted]        
          912ec66d7572ff821749319396470bde.svg   444 kB          [emitted]  [big] 
          b06871f281fee6b241d60582ae9369b9.ttf   166 kB          [emitted]        
          89889688147bd7575d6327160d64e760.svg   109 kB          [emitted]        
                                     vendor.js   491 kB       0  [emitted]  [big]  vendor
                                    vendor.css   560 kB       0  [emitted]  [big]  vendor
                            please-wait.min.js  5.62 kB          [emitted]        
                               please-wait.css   4.2 kB          [emitted]        
      Hash: 0a281103fae1aa51e985
      Version: webpack 2.7.0
      Child
          Hash: 0a281103fae1aa51e985
          Time: 44505ms
           Asset    Size  Chunks                    Chunk Names
          app.js  491 kB       0  [emitted]  [big]  app
      daycare -> D:\local\Temp\8d56a84476f060b\
    KuduSync.NET from: 'D:\local\Temp\8d56a84476f060b' to: 'D:\home\site\wwwroot'
    Copying file: 'Daycare.deps.json'
    Copying file: 'Daycare.runtimeconfig.json'
    Copying file: 'wwwroot\dist\674f50d287a8c48dc19ba404d20fe713.eot'
    Copying file: 'wwwroot\dist\89889688147bd7575d6327160d64e760.svg'
    Copying file: 'wwwroot\dist\912ec66d7572ff821749319396470bde.svg'
    Copying file: 'wwwroot\dist\app.js'
    Copying file: 'wwwroot\dist\b06871f281fee6b241d60582ae9369b9.ttf'
    Copying file: 'wwwroot\dist\please-wait.css'
    Copying file: 'wwwroot\dist\please-wait.min.js'
    Copying file: 'wwwroot\dist\vendor-manifest.json'
    Copying file: 'wwwroot\dist\vendor.css'
    Copying file: 'wwwroot\dist\vendor.js'
    Finished successfully.

    At the end of this process, when a request comes into the website, the Azure WebApp service kicks off the process by running some version of 'dotnet run'. You can see the website hosted in Azure here or browse the sources at the project page.


  9. In the last post of this series I wrote about the webpack.config.vendor.js file that is auto-generated by the dotnet new template and configures how the vendor bundle is created. In this post I will look at the webpack.config.js file that creates the application bundle. The full contents are below, followed by an explanation of each major part:

    const path = require('path');
    const webpack = require('webpack');
    const { AureliaPlugin } = require('aurelia-webpack-plugin');
    const bundleOutputDir = './wwwroot/dist';
     
    module.exports = (env) => {
        const isDevBuild = !(env && env.prod);
        return [{
            stats: { modules: false },
            entry: { 'app': 'aurelia-bootstrapper' },
            resolve: {
                extensions: ['.ts', '.js'],
                modules: ['ClientApp', 'node_modules'],
            },
            output: {
                path: path.resolve(bundleOutputDir),
                publicPath: 'dist/',
                filename: '[name].js'
            },
            module: {
                rules: [
                    { test: /\.ts$/i, include: /ClientApp/, use: 'ts-loader?silent=true' },
                    { test: /\.html$/i, use: 'html-loader' },
                    { test: /\.css$/i, use: isDevBuild ? 'css-loader' : 'css-loader?minimize' },
                    { test: /\.(png|jpg|jpeg|gif|svg)$/, use: 'url-loader?limit=25000' }
                ]
            },
            plugins: [
                new webpack.DefinePlugin({ IS_DEV_BUILD: JSON.stringify(isDevBuild) }),
                new webpack.DllReferencePlugin({
                    context: __dirname,
                    manifest: require('./wwwroot/dist/vendor-manifest.json')
                }),
                new AureliaPlugin({ aureliaApp: 'boot' })
            ].concat(isDevBuild ? [
                new webpack.SourceMapDevToolPlugin({
                    filename: '[file].map', // Remove this line if you prefer inline source maps
                    moduleFilenameTemplate: path.relative(bundleOutputDir, '[resourcePath]') // Point sourcemap entries to the original file locations on disk
                })
            ] : [
                new webpack.optimize.UglifyJsPlugin()
            ])
        }];
    }
    • Lines #6 and #7: the exported function is called by WebPack and receives the command-line arguments (if any). These are used later on to make small tweaks depending on whether we are creating debug or production bundles.
    • Line #10 declares a single bundle named 'app' and sets its entry point to 'aurelia-bootstrapper'. This is a separate npm module from aurelia-framework, and it is in charge of starting the application on the client: it locates the 'aurelia-app' element in the page, as described in a previous post. In short, all the Aurelia npm modules are included in the 'vendor' bundle, except for this one, which is included in the 'app' bundle.
    • Line #11 declares both .ts and .js as known extensions, so that our code can issue import statements without needing to define an extension. Additionally it declares that WebPack should search for modules in the ‘node_modules’ directory (the default) and in the ‘ClientApp’ directory (which is where all our client side code will live inside the project directory).
    • Line #15 defines that all the assets will be written to the 'wwwroot/dist' folder and that the bundle will be named 'app.js'.
    • Line #20 defines how to load all the modules that are not javascript files:
      • All typescript files will use the ‘ts-loader’ which will compile the files down to javascript using the ‘tsconfig.json’ file that is included on the root of the project.
      • All html files will use the 'html-loader', which processes html files so that any images referenced by <img> tags are loaded as modules.
      • Related to the previous bullet, all images use the ‘url-loader’, which can automatically move the image to the output directory and depending on the image size it can inline the content.
      • Finally, the CSS files use the 'css-loader' and, depending on whether we are running a debug or production build, the CSS is optionally minimized.
    • Line #28 defines all the plugins to use:
      • First it uses the DefinePlugin to write a global variable named ‘IS_DEV_BUILD’ so that client side code can read it to make decisions whether it is running on debug or production.
      • Then it uses the DllReferencePlugin to consume the manifest that was created as part of the vendor bundle, as described on my previous post.
      • Next comes the AureliaPlugin… which is shrouded in mystery. I don't know exactly what this plugin does. My guess is that this is how we tell WebPack where to start discovering the application modules. Remember that the entry point was defined just as the 'aurelia-bootstrapper' module, so here we tell the plugin that our code graph begins in a module called 'boot'.
      • Then it checks if this is running in debug mode and, if so, uses the SourceMapDevToolPlugin to write source maps to the output folder to enable source-code stepping from the browser.
      • Lastly, if this is running in production mode, it minifies and compresses the generated javascript bundle.

    Phew. At this point we are done with WebPack; we have the app running and can start developing the application. However, before doing that, let's make a big pit stop and talk about how the application is going to be deployed.


  10. In the previous post of this series I wrote about how to use a DialogPage to integrate settings into VisualStudio’s options dialog and how to use the settings store to persist settings for the extension. The next step is to make use of the settings from the extension itself. For example, when a user control is loaded, it needs to read which theme is selected from settings to apply the appropriate ResourceDictionaries. The control’s constructor looks like this:
    public partial class JiraIssuesUserControl : UserControl
    {
        private readonly JiraIssuesUserControlViewModel _viewModel;
        private readonly ThemeManager _themeManager;
     
        public JiraIssuesUserControl(JiraIssuesUserControlViewModel viewModel)
        {
            InitializeComponent();
     
            this.DataContext = this._viewModel = viewModel;
     
            var theme = viewModel.Services.VisualStudioServices.GetConfigurationOptions().Theme;
            _themeManager = new ThemeManager(this.Resources, theme);
        }
    }
    The user control receives its ViewModel as a dependency, which in turn exposes an instance of ‘VisualStudioServices’ that can be used to get the settings from the host. Here is the implementation:
    public class VisualStudioServices : IVisualStudioServices
    {
        private readonly DTE _env;
     
        public VisualStudioServices(DTE environment)
        {
            this._env = environment;
        }
     
        public IJiraOptions GetConfigurationOptions()
        {
            var result = new JiraOptions();
            var properties = this._env.get_Properties(JiraOptionsDialogPage.CategoryName, JiraOptionsDialogPage.PageName);
     
            if (properties != null)
            {
                result.MaxIssuesPerRequest = (int)properties.Item("MaxIssuesPerRequest").Value;  
            }
     
            return result;
        }
    }
    Notice how the properties can be extracted directly from the DTE interface provided by VisualStudio. This is different from the JSON string that was used to serialize the settings to the SettingsStore shown in the previous post: VisualStudio first loads the settings from the store to hydrate the DialogPage object, but then has this API to access individual properties from the object. This is a very strange design in my opinion; I felt like I had already written code to de-serialize settings into my own object, but now I have to write it again using a completely different API. Anyway, to close out this post, let’s see how the VisualStudio host creates the instance of the VisualStudioServices class:
    public sealed class VSJiraPackage : Microsoft.VisualStudio.Shell.Package
    {
        public static VSJiraServices Services;
     
        protected override void Initialize()
        {
            DTE env = (DTE)GetService(typeof(DTE));
     
            Services = new VSJiraServices()
            {
            VisualStudioServices = new VisualStudioServices(env)
            };
     
            base.Initialize();
             
            ...
        }
    }
    The only relevant thing here is the GetService method, used to get an implementation of the DTE interface. VisualStudio has its own IoC container going on, and this is the way to get a hold of all sorts of services when the extension loads. In my case, I just keep a static instance of all the services that the extension can use, and this is the object used whenever a WPF Window or UserControl is loaded into VisualStudio.

    You can download the extension from the VisualStudio Marketplace or visit the project page to browse the source code.

