Steve Spencer's Blog

Blogging on Azure Stuff

Creating a Scheduled Web Job in Azure

It’s been a while since I’ve talked about web jobs, but they are still around. I needed to modify one of mine recently and configure it as a scheduled web job.

You can deploy your web job from the Azure Portal. Web jobs are part of App Services and are deployed by selecting the app service you want and clicking the WebJobs service.

image

Click the Add button.

image

Enter a name for your web job, then browse to a zip or exe containing your web job.

Select Triggered from the Type drop down. This changes the UI to allow you to select Scheduled.

image

The Schedule is triggered using a CRON expression.
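For example, the scheduler uses a six field CRON format (seconds, minutes, hours, day, month, day of week), so an expression along these lines (just an illustration) would run the job at 8am every weekday:

0 0 8 * * 1-5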

Alternatively, configure the web job as a manual trigger, then use the Azure Scheduler to trigger the web job. When you have configured your web job, click on its properties in the Azure Portal and you should see a web hook URL:

https://yourwebapp.scm.azurewebsites.net/api/triggeredwebjobs/yourwebjob/run
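
If you want to test the web hook outside of the scheduler, something like this PowerShell should trigger it (a sketch: the Kudu run endpoint expects the web app's deployment/publishing credentials, and the URL is the one shown in the web job's properties):

$creds = Get-Credential   # the web app's deployment (publishing) credentials
Invoke-RestMethod -Method Post -Uri "https://yourwebapp.scm.azurewebsites.net/api/triggeredwebjobs/yourwebjob/run" -Credential $creds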

Now create a new Scheduler job.

image

Select POST as the method and paste in the webhook URL. Once you've completed this configuration you can then configure the schedule.

image

Using the scheduler also allows you to configure retry policies and error actions.

Azure Relay Hybrid Connections

If you are using the Azure App Service to host your web site and you want to connect to an on-premises server then there are a number of ways you can do this. One of the simplest is to use a hybrid connection. Hybrid connections have had a bit of a revamp lately: they used to require a BizTalk service to be created, but now you just need a Service Bus Relay. You can generally use the hybrid connection to communicate with your back-end server over TCP, and you will need to install an agent called the Hybrid Connection Manager (HCM) on your server (or a server that can reach the one you want to connect to). HCM will make an outbound connection to the Service Bus Relay over ports 80 and 443, so you are unlikely to need firewall ports changing.

Hybrid connections are limited to a specific server name and port. Your code in the Azure App Service addresses the service as if it were on your local network, but it will only be able to connect to the machine and port configured in the hybrid connection. Instructions for configuring your hybrid connection and HCM are here.

I have set up a number of the old BizTalk style hybrid connections and the new way is a lot easier to do. I ran into a few connectivity issues when I first created the Relay hybrid connection, and there were a few things I found that helped me to work out where the issues were. Firstly, the link I provided to configure the hybrid connection has a troubleshooting section which talks about tcpping. You can run this in the debug console in Azure and it will check to see if your HCM is talking to the same relay as the one in your app service. To get to the debug console, log in to your Azure portal, select the app service you want to diagnose, scroll down to Advanced Tools and click Go.

image

This will take you to your Kudu dashboard, where you can do a lot of nice things such as the process explorer, diagnostic dumps, log streaming and the debug console.

The address will be https://[your web app name].scm.azurewebsites.net/

The debug console will allow you to browse and edit files directly in your application without the need to ftp. This is really useful when trying to check configuration issues.
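From the debug console you can run the tcpping check mentioned in the troubleshooting guide, along these lines (replace the namespace with your own):

tcpping [your relay namespace].servicebus.windows.net:443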

If you want to check connectivity from your server machine to the Azure Relay then you can use telnet. You might need to add the telnet feature to Windows by using:

dism /online /Enable-Feature /FeatureName:TelnetClient (From https://www.rootusers.com/how-to-enable-the-telnet-client-in-windows-10/)

In a command prompt type:

telnet [your relay namespace].servicebus.windows.net 80 or

telnet [your relay namespace].servicebus.windows.net 443

A blank screen then denotes successful connectivity (from: https://social.technet.microsoft.com/wiki/contents/articles/2055.troubleshooting-connectivity-issues-in-the-azure-appfabric-service-bus.aspx).

You can also use PowerShell to check:

Test-NetConnection -ComputerName [your relay namespace].servicebus.windows.net -Port 443

This all checks that you are connected to the relay. The final thing you need to check is whether you can actually resolve the DNS name of the target service from the server where HCM is running. This needs to be the host name of the server and not the fully qualified name, and it also needs to match the machine name you configured in the hybrid connection. The easiest way to do this for me was to put the address of the WCF service I wanted to connect to into a browser on the machine running HCM.
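Alternatively, from the HCM machine something like this should confirm both name resolution and connectivity to the target port (a sketch; use the same host name you configured in the hybrid connection, and your own service port):

Test-NetConnection -ComputerName [your server host name] -Port [your service port]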

Hopefully I’ve given you a few pointers to help identify why your hybrid connection does not connect.

Custom ASP.NET MVC app running in a Container on Service Fabric

In an earlier post, I talked about how to create a Docker container on Windows that housed a custom ASP.Net MVC app. What I want to show now is how you can get this container running in Service Fabric.

I created 3 identical virtual machines, all capable of running Docker as in my earlier post. Now I needed to make my three VMs into a Service Fabric cluster. These two posts explain how:

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-get-started

https://docs.microsoft.com/en-us/azure/service-fabric/service-fabric-cluster-standalone-deployment-preparation

My 3 VMs are called sf0, sf1 & sf2 and I needed to put these into my cluster config. I picked the ClusterConfig.Unsecure.MultiMachine config file that comes with the Service Fabric files and changed it to include my 3 VMs, so my nodes look like this:

"nodes": [
{
      "nodeName": "sf0",
      "iPAddress": "sf0",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc1/r0",
      "upgradeDomain": "UD0"
},
{
      "nodeName": "sf1",
      "iPAddress": "sf1",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc2/r0",
      "upgradeDomain": "UD1"
},
{
      "nodeName": "sf2",
      "iPAddress": "sf2",
      "nodeTypeRef": "NodeType0",
      "faultDomain": "fd:/dc3/r0",
      "upgradeDomain": "UD2"
}
],

I then remoted onto one of the machines and ran the following PowerShell:

.\TestConfiguration.ps1 -ClusterConfigFilePath .\ClusterConfig.json

This will check all the machines in the ClusterConfig.json file to see if they are configured correctly and report any errors. I got the following error:

Machine 'sf2' is not reachable on port 445. Check connectivity/open ports. Error: A connection attempt failed because the connected party did not properly respond after a period of time, or established connection failed because connected host has failed to respond 192.168.1.222:445

This meant I needed to open the correct firewall ports on my VM. I got this error for all the machines in the cluster. Once I fixed this and reran the PowerShell, the tests passed which meant I could install Service Fabric on each of the machines as follows:

.\CreateServiceFabricCluster.ps1 -ClusterConfigFilePath .\ClusterConfig.json -AcceptEULA

When this completes successfully you should see something like this:

Your cluster is successfully created! You can connect and manage your cluster using Microsoft Azure Service Fabric Explorer or Powershell. To connect through Powershell, run 'Connect-ServiceFabricCluster'.

I could connect to Service Fabric Explorer using: http://sf0:19080
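The PowerShell option mentioned in that output looks something like this (a sketch; 19000 is the default client connection port):

Connect-ServiceFabricCluster -ConnectionEndpoint sf0:19000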

Now that I have my cluster running, I needed to create a Service Fabric app and deploy it to the cluster. Make sure that you have installed the Service Fabric SDK, then run Visual Studio and create a new Service Fabric project. When the project is created, right click on the Services node and select Add -> New Service Fabric Service.

clip_image001[4]

Then pick Guest Container and enter the name of the Docker Hub repository where your Docker image resides.

clip_image001

This will add the necessary files to your Service Fabric project. If you remember from my earlier post, the website was hosted on port 8000 of the container. We need to tell Service Fabric about this, and we may also want to map it to a different port.

If you open the container's ServiceManifest file:

clip_image001[8]

Add an Endpoint specifying the port you want Service Fabric to use to publish the website:

clip_image001[10]

In this example I’m using the same port. If you want to map the port to a different one then change this to something else, e.g. if I wanted to use http://sf0:8080 as the website then I would change the Service Manifest to this:

image

You also need to tell Service Fabric about the container port that is published. This is done in the application manifest file:

image

This is set to 8000 as that is the port exposed by the Docker container

Now deploy your application to Service Fabric. It may take a while to initialise your container as the image will need to be downloaded from Docker Hub before it will run. Once it is running you should see it as Ready in Service Fabric Explorer.

image

Error updating SSL certificates in Azure App Services

I was asked to update the SSL certificates on a website that was hosted in Azure Web Apps. No problem I thought.

Go to the Azure Portal,

Select the website you want to update.

When the blade appears scroll down the left panel and select SSL Certificates

image

image

 

Remove the binding by clicking … at the end of the binding row and selecting Delete.

image

Now remove the certificate by clicking … at the end of the certificate row and selecting Delete.

This is where I got an error

image

It took a short while to resolve this.

I tried a few things like restarting the site and checking the staging slot, but I still got the error. Finally, I checked the other sites in the same app service plan and found that the same certificate was used by another web app (both using the same domain URL). Once I removed the binding from that site, I could delete the certificate and upload a new one. I then had to add the new bindings to both sites.

Custom ASP.NET MVC app running in a Windows Container

With the introduction of Windows Containers on Windows Server 2016 and the ability to run containers in Service Fabric, I thought it was time to investigate Windows Containers, and I wanted to know how to build one that will run a web site using IIS.

As I’m new to containers, although I’ve done a very similar exercise with Docker on Linux, I decided to follow the Windows Quick Start Guide. I hit a few problems early on so I’ve put the steps I followed here:

After opening a PowerShell window as administrator I ran the following commands:

Install-Module -Name DockerMsftProvider -Repository PSGallery -Force (ran OK)
Install-Package -Name docker -ProviderName DockerMsftProvider (had an error)
WARNING: Cannot verify the file SHA256. Deleting the file.
WARNING: C:\Users\ADMINI~1.DEV\AppData\Local\Temp\DockerMsftProvider\Docker-1-12-2-cs2-ws-beta.zip does not exist
Install-Package : Cannot find path 'C:\Users\ADMINI~1.DEV\AppData\Local\Temp\DockerMsftProvider\Docker-1-12-2-cs2-ws-beta.zip' because it does not exist.

 

I'm not sure what was causing this to fail, but I followed the instructions to manually install (from https://github.com/OneGet/MicrosoftDockerProvider/issues/15):

Start-BitsTransfer -Source https://dockermsft.blob.core.windows.net/dockercontainer/docker-1-12-2-cs2-ws-beta.zip -Destination /docker.zip
Get-FileHash -Path /docker.zip -Algorithm SHA256
mkdir C:\Users\Administrator\AppData\Local\Temp\DockerMsftProvider\
cp .\docker.zip C:\Users\Administrator\AppData\Local\Temp\DockerMsftProvider\
cd C:\Users\Administrator\AppData\Local\Temp\DockerMsftProvider\
cp .\docker.zip Docker-1-12-2-cs2-ws-beta.zip
Install-Package -Name docker -ProviderName DockerMsftProvider -Verbose
Restart-Computer -Force

After rebooting I tried to download and run a sample container:

docker run microsoft/dotnet-samples:dotnetapp-nanoserver

but I got the following error:

docker : C:\Program Files\Docker\docker.exe: error during connect: Post http://%2F%2F.%2Fpipe%2Fdocker_engine/v1.25/containers/create: open //./pipe/docker_engine: The system cannot find the file specified..
At line:1 char:1
+ docker run microsoft/dotnet-samples:dotnetapp-nanoserver
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (C:\Program File...ile specified..:String) [], RemoteException
+ FullyQualifiedErrorId : NativeCommandError

It turns out that the docker service wasn’t running after the reboot, so open services.msc and find the docker service to start it.
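Or start it from PowerShell (the service is named docker):

Start-Service docker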

Running the same command again will download the image from Docker Hub, create a container from it and then run it. As this container just writes its output to the console, it runs once and then stops.

clip_image001

Every time I run docker run microsoft/dotnet-samples:dotnetapp-nanoserver it creates a new container. What I want to do is start one of the stopped containers again and view the output on the screen.

For this I needed to start the container I had already created. If you run

docker ps -a

This will list all the containers, both running and stopped, and you can see from the image below that I had run docker run a number of times. Each time it tried to download the image (which was already downloaded) and then created a new container from it.

clip_image001[6]

docker container start -a 12d382ae0bd6 (-a attaches STDOUT so you can see the output)

clip_image001[8]

Now that I know how to create and start containers, I wanted to build one of my own. This is easier than I first thought as there are lots of base images available on Docker Hub and GitHub.

https://docs.microsoft.com/en-us/virtualization/windowscontainers/samples#Application-Frameworks

https://hub.docker.com/r/microsoft/

I picked one on Docker Hub that has IIS and ASP.Net installed already, so all I needed to do afterwards was to add my own website and configure IIS correctly. Using docker pull:

docker pull microsoft/aspnet (see https://hub.docker.com/r/microsoft/aspnet/)

This retrieves the base image from Docker Hub, and I want to use that image to install my ASP.Net MVC site and configure IIS to serve the pages on port 8000. Following the instructions here (https://docs.microsoft.com/en-us/dotnet/articles/framework/docker/aspnetmvc), I published my MVC site and copied the publish folder to my Docker machine. Then I needed to create a Dockerfile recipe to instruct Docker what to install in my image, so I created a folder that contained the Dockerfile and also the published website, as below:

clip_image001[10]

The contents of the Dockerfile are:

# The FROM instruction specifies the base image. You are
# extending the microsoft/aspnet image.
FROM microsoft/aspnet
# Next, this Dockerfile creates a directory for your application
RUN mkdir C:\sdsweb
# configure the new site in IIS.
RUN powershell -NoProfile -Command \
Import-module IISAdministration; \
New-IISSite -Name "sdsweb" -PhysicalPath C:\sdsweb -BindingInformation "*:8000:"
# This instruction tells the container to listen on port 8000.
EXPOSE 8000
# The final instruction copies the site you published earlier into the container.
ADD sdswebsource/ /sdsweb

Now I need to run this to create the image

In PowerShell, I changed directory to the folder containing my Dockerfile, then ran

docker build -t sdsweb .

This has created an image and we now need to get it running as a container.

clip_image001[12]

using docker run again

docker run -d -p 8000:8000 --name sdsweb sdsweb

clip_image001[14]

My container is now running and I should be able to view the web pages in my browser on port 8000, but I need to know the IP address first

docker inspect -f "{{ .NetworkSettings.Networks.nat.IPAddress }}" sdsweb

clip_image001[16]

Now I can browse to http://172.17.97.235:8000

clip_image001[18]

I changed the default web page to show the machine name serving the pages under Getting Started. Listing the containers will show the container ID and this matches the machine name displayed on the web page

image

That’s it running in a container. There are a couple more things I’d like to do before I’ve finished. The first is to make sure that when my Windows Server restarts, my sdsweb container also starts. At the moment it will not start, as I didn’t add a restart parameter when I called docker run. Adding --restart always will cause the container to restart when Windows restarts.

docker run -d -p 8000:8000 --name sdsweb --restart always sdsweb

The final thing I want to do is to be able to share this image, so I’d like to push it up to Docker Hub.

docker login - enter username and password
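
If you built the image with a plain local name (sdsweb above), it will also need tagging with the Docker Hub repository name before the push, something like:

docker tag sdsweb recneps/sdsweb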

docker push recneps/sdsweb

clip_image001[20]

Then to use it on another machine

docker pull recneps/sdsweb

clip_image002

In my next post I am going to look at how I can create a container that can be hosted in Service Fabric

Diagnosing Azure Logic App Faults

In my previous post I showed you how to create a flat file decoding Logic App. When I first tested this I used the URL generated when I saved the Logic App to create a POST request using the Compose feature of Fiddler.

image

The body of the message contained a single CSV row that I wanted to debug, and I was hoping to see a response containing the XML representation of the CSV file.

When I submitted the request, Fiddler displayed a 502 error (Bad Gateway).

image

This wasn’t very helpful and I needed to understand what had gone wrong.

Navigating to the Logic App in the Azure Portal, I noticed that the Overview blade showed the runs for the Logic App.

image

Clicking the failed run opened a new blade in the portal, which showed my Logic App failing at the Flat File Decoding stage.

image

Clicking the Flat File Decode caused it to expand and show me a more detailed (and useful) error

image

The error told me that the Flat File Decode couldn’t find the CR/LF at the end of the line. As I’d put in a single line of text with no CR/LF, the decoding failed.

I repeated the POST, this time with a CR/LF at the end of the body, and the Logic App worked correctly: the XML representation of the CSV was displayed in the body of the response.

Processing a flat file with Azure Logic Apps

A lot of companies require the transfer of files in order to transact business, and there is always a need to translate these files from one format to another. Logic Apps provides a straightforward way to build serverless components that provide the integration points into your systems. This post is going to look at Logic Apps enterprise integration to convert a multi-record CSV file into an XML format. Most of the understanding for this came from the following post:

https://seroter.wordpress.com/2016/09/09/trying-out-standard-and-enterprise-templates-in-azure-logic-apps/

Logic Apps can be created in Visual Studio or directly in the Azure Portal using the browser. Navigate to the Azure portal https://portal.azure.com, click the plus button at the top of the right hand column, then Web + Mobile, then Logic App.

clip_image002

Complete the form and click Create

clip_image003

This will take a short while to complete. Once complete, you can select the logic app from your resource list to start to use it.

If you look at my recent list

clip_image005

You can see the logic app I’ve just created, along with my previous logic app, and you will also notice that there is an integration account and an Azure function. These are both required in order to create the schemas and maps needed to translate the CSV file to XML.

The integration account stores the schemas and maps and the Azure function provides some code that is used to translate the CSV to XML.

An integration account is created the same way as a logic app. The easiest way is to click on the plus symbol and then search for integration

clip_image007

Click on Integration Account then Create

clip_image009

Complete the form

clip_image010

Then Create. Once created you can start to add your schemas and maps

clip_image012

You will now need to jump into Visual Studio to create your maps and schemas. You will need to install the Logic Apps Integration Tools for Visual Studio

You will need to create a schema for the CSV file and a schema for the XML file. These two blog posts walk you through creating a flat file schema for a CSV file and also a positional file

I created the following two schemas

clip_image014

clip_image016

Once you have created the two schemas, you will need to create a map which allows you to map the fields from one schema to the fields in the other schema.

clip_image018

In order to upload the map you will need to build the project in Visual Studio to generate the xslt file.

The schemas and map file project can be found in my repository on GitHub

To upload the files to the integration account, go back to the Azure portal where you previously selected the integration account, click Schemas then Add

clip_image020

Complete the form, select the schema file from your Visual Studio project and click OK. Repeat this for both schema files. You do the same thing for the map file. You will need to navigate to your bin/Debug (or Release) folder to find the xslt file that was built. Your integration account should now show your schemas and maps as uploaded

clip_image021

There’s one more thing to do before you can create your logic app. In order to process the transformation some code is required in an Azure Function. This is standard code and can be created by clicking the highlighted link on this page. Note: If you haven’t used Azure Functions before then you will also need to click the other link first.

clip_image023

clip_image025

This creates a function with the necessary code required to perform the transformation.

clip_image027

You are now ready to start your logic app. Click on the Logic App you created earlier. This will display a page where you can select a template with which to create your app.

clip_image029

Close this down as you need to link your integration account to your logic app.

Click on Settings, then Integration Account and pick the integration Account where you previously uploaded the Schemas and Map files. Save this and return to the logic app template screen.

Select VETER Pipeline

clip_image031

Then “Use This Template”. This is the basis for your transformation logic. All you need to do now is to complete each box.

clip_image032

clip_image034

In Flat File Decoding & XML Validation, pick the CSV schema

clip_image035

In the Transform XML step:

clip_image036

Select the function container, the function and the map file

All we need to do now is to return the transformed xml in the response message. Click “Add an Action” on Transform XML and search for Response.

clip_image038

Pick the Transformed XML content as the body of the response. Click save and the URL for the logic app will be populated in the Request flow

clip_image039

We now have a request that takes the CSV in the body and returns the transformed XML in the body of the response. You can test this using a tool like Postman or Fiddler to send the request to the request URL above.
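If you prefer the command line, something along these lines should work too (a sketch; the URL is your logic app's request URL and test.csv is whatever sample file you are using):

$csv = Get-Content .\test.csv -Raw
Invoke-RestMethod -Method Post -Uri "<your logic app request URL>" -Body $csv -ContentType "text/plain"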

There is also a test CSV file in my repository on GitHub which can be used to test this.

My next post covers how I diagnosed a fault with this Logic App

TFS Release Manager, Remote PowerShell & errorcode 0x80090322

I’m using Release Manager in Visual Studio Team Services (i.e. the one in the cloud) to deploy to my on-premises back-end servers. Release Manager does this by using an agent in the environment into which you want to deploy. You can deploy and configure the agent through Release Manager; instructions are at https://blogs.msdn.microsoft.com/visualstudioalm/2016/04/05/deploy-artifacts-from-onprem-tfs-server-with-release-management-service/

However, this requires remote PowerShell to be configured correctly on the target machine. You may think this is really easy to do.

Run PowerShell as admin then type

Winrm quickconfig

Once configured, I had to allow the machine running the agent to access the target machine:

Set-Item WSMan:\localhost\Client\TrustedHosts -Value "*.<domain name>"
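
You can check the value afterwards with:

Get-Item WSMan:\localhost\Client\TrustedHosts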

And also allow remote scripts to run using:

Set-ExecutionPolicy RemoteSigned

Testing this I used:

Enter-PSSession <SERVER NAME>

On all my local VMs this worked fine but as soon as I tried it on my UAT and Production servers I got a generic error which listed a lot of possible problems:

Connecting to remote server <SERVER NAME> failed with the following error message: WinRM cannot process the request.
The following error with errorcode 0x80090322 occurred while using Negotiate authentication: An unknown security error occurred.
Possible causes are:
-The user name or password specified are invalid.
-Kerberos is used when no authentication method and no user name are specified.
-Kerberos accepts domain user names, but not local user names.
-The Service Principal Name (SPN) for the remote computer name and port does not exist.
-The client and remote computers are in different domains and there is no trust between the two domains.
After checking for the above issues, try the following:
-Check the Event Viewer for events related to authentication.
-Change the authentication method; add the destination computer to the WinRM TrustedHosts configuration setting or use HTTPS transport.
Note that computers in the TrustedHosts list might not be authenticated.
-For more information about WinRM configuration, run the following command: winrm help config. For more information, see the about_Remote_Troubleshooting Help topic.

Searching for the error returned a lot of similar results including:

Deleting SPNs

https://social.technet.microsoft.com/Forums/windows/en-US/a4c5c787-ea65-4150-8d16-2a19c569a589/enterpssession-winrm-cannot-process-the-request-kerberos-authentication-error-0x80090322?forum=winserverpowershell

A conflict between ports

http://sharepoint-community.net/profiles/blogs/powershell-remoting-error

Sites to help with troubleshooting

https://blogs.technet.microsoft.com/jonjor/2009/01/09/winrm-windows-remote-management-troubleshooting/

None of them fixed the problem. I managed to get Remote Powershell to work by setting the SPN to specific ports but this then broke Reporting Services so I reverted my changes.

You know you are struggling when all the searches you do yield results you have already read, but on one web site that didn’t appear to be relevant I found this little nugget of information:

"Remote PowerShell requires port 80 to be available on the Default Web Site"

https://blogs.technet.microsoft.com/exchange/2010/02/04/troubleshooting-exchange-2010-management-tools-startup-issues/

Looking at my web server there was no default website and nothing using port 80 so I added one and remote PowerShell started to work!
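If you'd rather script it, something like this should create a placeholder site listening on port 80 (a sketch; the site name and path are just examples):

Import-Module WebAdministration
New-Website -Name "Default Web Site" -Port 80 -PhysicalPath "C:\inetpub\wwwroot"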

I can now deploy from the cloud onto my back-end servers without opening any firewall ports :)

Migrating Azure WebJobs to Azure Service Fabric

As part of a proof of concept for Azure Service Fabric, one of the challenges was to migrate back-end services from a variety of different places. I had a number of services running as Azure WebJobs on the same platform as my web site. The WebJobs were hosted as triggered services using the WebJobs SDK, which has the advantage that the WebJob will run as a console application outside of the Azure web site it is currently hosted in.

Azure Service Fabric has the capability to run any Windows application that can be run from a command line as a guest executable. This means that I could host my WebJob in Service Fabric as a guest executable.

Once I had Visual Studio set up with the Service Fabric SDK & Tools, it was relatively straightforward to add the WebJob.

As an example, my WebJob is triggered when a message is placed onto an Azure Storage Queue and it then passes the message into an Azure Service Bus Topic. The WebJob project was added to my Service Fabric application

clip_image001

To add this as a Guest Executable, right click on your service node in the Service Fabric application and select “New Service Fabric Service”

clip_image002

When the “New Service Fabric Service” dialog appears, select “Guest Executable”

clip_image004

Click Browse and select your WebJob executable folder. The WebJob executable should now appear in the Program drop down. Select this, change the service name and click OK.

This should add your WebJob as a guest executable to your application package root

clip_image005

Once deployed to a Service Fabric cluster, your WebJob should run as normal. If you leave the connection string settings the same as they are in the WebJob then your diagnostic traces will appear in the same blob container as they are now.