
Tag Azure VMs and Resources

Tagging in Azure is a massively useful feature.  I have customers who are interested in tagging resources to identify them for billing, but tags are also a very useful tool for control.  Resources can be grouped by tag, and a script can then apply a function to all machines or services carrying the same tag.

In the example below I set a variable that finds Azure resources whose type identifies them as Microsoft virtual machines.  Calling this enables me to extract a range of information. (In fact the script then goes on to use the ResourceId too.)

As referenced in Using tags to organize your Azure resources, tags are updated as a whole, so if you want to add additional tags you first have to retrieve the existing ones.  In the example below I am adding the new tag to my existing tags.

Finally, we loop through each VM and apply the updated tag set via a Set command.

Hope this is of use to you, happy tagging :-)!


$FindVMs = Find-AzureRmResource | Where-Object {$_.ResourceType -like "Microsoft.Compute/virtualMachines"}

Foreach ($vm in $FindVMs)
{
    $ResourceId = $vm.ResourceId
    # Tags are replaced as a whole, so retrieve the existing tags for this resource first
    $Tags = (Get-AzureRmResource -ResourceId $ResourceId).Tags
    # Append the new tag to the existing set
    $Tags += @{ Owner = "wade" }
    Set-AzureRmResource -Tag $Tags -ResourceId $ResourceId -Force
}
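
Coming back to the grouping idea above: once resources carry a tag, acting on the whole group is a filter plus a loop.  A minimal sketch (the tag name and value here are purely illustrative):

$Tagged = Find-AzureRmResource -TagName "Environment" -TagValue "Lab"
Foreach ($resource in $Tagged)
{
    # Do whatever you need to the group - here we just list what was found
    Write-Output ("{0} ({1})" -f $resource.Name, $resource.ResourceType)
}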

Disclaimer:  Please note although I work for Microsoft the information provided here does not represent an official Microsoft position and is provided as is.

Auto Shutdown ASM Virtual Machines in Azure | Azure Automation

I recently put together a quick script to auto-shutdown tagged ARM VMs.

There are many people still running ASM (Classic) VMs, and why wouldn’t you?  They are still supported (as of 9/2016).

The process is not much different, and in fact now that Azure Automation creates a Run As account at set up, it is much easier to configure.

In the example below I have tacked on changes to the Azure Automation Team’s sample script, one of four created for you when you enable the feature.

<#
.DESCRIPTION
An example runbook which gets all the Classic VMs in a subscription using the Classic Run As Account (certificate) and then shuts down running VMs
.NOTES
AUTHOR: Azure Automation Team + Jonathan Wade
LASTEDIT: 28-08-2016
#>

$ConnectionAssetName = "AzureClassicRunAsConnection"
$ServiceName = "wadeclassiv01"

# Authenticate to Azure with certificate
Write-Verbose "Get connection asset: $ConnectionAssetName" -Verbose
$Conn = Get-AutomationConnection -Name $ConnectionAssetName
if ($Conn -eq $null)
{
    throw "Could not retrieve connection asset: $ConnectionAssetName. Assure that this asset exists in the Automation account."
}

$CertificateAssetName = $Conn.CertificateAssetName
Write-Verbose "Getting the certificate: $CertificateAssetName" -Verbose
$AzureCert = Get-AutomationCertificate -Name $CertificateAssetName
if ($AzureCert -eq $null)
{
    throw "Could not retrieve certificate asset: $CertificateAssetName. Assure that this asset exists in the Automation account."
}

Write-Verbose "Authenticating to Azure with certificate." -Verbose
Set-AzureSubscription -SubscriptionName $Conn.SubscriptionName -SubscriptionId $Conn.SubscriptionID -Certificate $AzureCert 
Select-AzureSubscription -SubscriptionId $Conn.SubscriptionID

# Get cloud service
    
$VMs = Get-AzureVM -ServiceName $ServiceName

# Stop each of the started VMs
foreach ($VM in $VMs)
{
    if ($VM.PowerState -eq "Stopped")
    {
        # The VM is already stopped, so send notice
        Write-Output ($VM.InstanceName + " is already stopped")
    }
    else
    {
        # The VM needs to be stopped
        $StopRtn = Stop-AzureVM -Name $VM.Name -ServiceName $VM.ServiceName -Force -ErrorAction Continue

        if ($StopRtn.OperationStatus -ne 'Succeeded')
        {
            # The VM failed to stop, so send notice
            Write-Output ($VM.InstanceName + " failed to stop")
        }
        else
        {
            # The VM stopped, so send notice
            Write-Output ($VM.InstanceName + " has been stopped")
        }
    }
}
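
If you want the runbook to sweep every cloud service in the subscription rather than the single hard-coded $ServiceName above, one small variation (a sketch, not part of the original sample) is to enumerate the services first:

# Replace the single-service lookup with all classic cloud services in the subscription
$VMs = Get-AzureService | ForEach-Object { Get-AzureVM -ServiceName $_.ServiceName }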


Disclaimer:  Please note although I work for Microsoft the information provided here does not represent an official Microsoft position and is provided as is.

PowerShell to list ASM VMs, Size and CloudService | Azure

I’ve been asked several times now to help customers running IaaS VMs in the Classic mode (ASM Mode) analyse their resources.

The following will list your classic VMs out by CloudService, VMName and VMSize to a CSV file.

You’ll need to edit the location of the file.

Please note this is not meant to be a very sophisticated script, just one to get you to the answer.

Happy hunting!


Add-AzureAccount
$services = Get-AzureVM | Group-Object -Property ServiceName
foreach ($service in $services)
{
    $out = @()
    foreach ($VM in $service.Group)
    {
        $VMSizes = $VM.InstanceSize

        $props = @{
            CloudService = $service.Name
            VMName       = $VM.Name
            VMSizes      = $VMSizes
        }

        $out += New-Object PSObject -Property $props
    }
    $out | Format-Table -AutoSize -Wrap CloudService, VMName, VMSizes
    # Edit this path to wherever you want the CSV written
    $out | Export-Csv c:\scripts\test.csv -Append
}
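
As a quick follow-on, the exported CSV lends itself to a size summary, which is usually the first question in these reviews.  A minimal sketch, assuming the same file path as above:

# Count classic VMs per size from the exported CSV
Import-Csv c:\scripts\test.csv |
    Group-Object -Property VMSizes |
    Sort-Object -Property Count -Descending |
    Select-Object -Property Name, Count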
 

Disclaimer:  Please note although I work for Microsoft the information provided here does not represent an official Microsoft position and is provided as is.

Creating a VM from an Azure Image | Azure

Working with Azure in the enterprise means you will quickly want to create your own custom images.  In this introductory article I will show you an example of how to create a VM from an existing generalized image.

Please note:

  • This is utilising the ARM model and does not apply to Classic.
  • This assumes you have created a generalized image in Azure and know where it is!
  • This process does not consider on-premises VMs.
  • This process uses Windows images.

The following documents and articles were used to create the script below.  Many thanks to the efforts and hard work of the authors.

Create a Virtual Machine from a User Image by Philo

Upload a Windows VM image to Azure for Resource Manager deployments by Cynthia Nottingham

Cynthia shows how to create the image and find the URL of the uploaded image.  She also gives detailed examples of the PowerShell scripts required to create the new VM.

Philo uses variables for existing networks, which I found very useful; I just comment out the pieces I do not need, e.g. the vnet creation when the vnet already exists.

Happy VM creating!


$cred = Get-Credential
$rgName = "ResourceGroupName"
$location = "Azure Location"
$pipName = "Public IP address Name"
$pip = New-AzureRmPublicIpAddress -Name $pipName -ResourceGroupName $rgName -Location $location -AllocationMethod Dynamic
$subnet1Name = "Subnet Name"
$vnetSubnetAddressPrefix = "Subnet address e.g. 10.1.0.0/24"
$vnetAddressPrefix = "vnet address e.g. 10.1.0.0/16"
$nicname = "Name of Nic"
$vnetName = "Name of vnet"
$subnetconfig = New-AzureRmVirtualNetworkSubnetConfig -Name $subnet1Name -AddressPrefix $vnetSubnetAddressPrefix
#$vnet = New-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName -Location $location -AddressPrefix $vnetAddressPrefix -Subnet $subnetconfig
# If the vnet already exists (creation commented out above), fetch it so $vnet is populated
$vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName
$nic = New-AzureRmNetworkInterface -Name $nicname -ResourceGroupName $rgName -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id
$vmName = "Name of VM"
$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize "Standard_A4"
$computerName = "Name of Computer"
$vm = Set-AzureRmVMOperatingSystem -VM $vmConfig -Windows -ComputerName $computerName -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
$osDiskName = "Name of Disk"
# The storage account that will hold the new OS disk
$storageAcc = Get-AzureRmStorageAccount -ResourceGroupName $rgName -Name "Name of Storage Account"
$osDiskUri = '{0}vhds/{1}{2}.vhd' -f $storageAcc.PrimaryEndpoints.Blob.ToString(), $vmName.ToLower(), $osDiskName
$urlOfUploadedImageVhd = "URL to generalized image e.g. https://somename.blob.core.windows.net/system/Microsoft.Compute/Images/templates/name-osDisk.00aaaa-1bbb-2dd3-4efg-hijlkmn0123.vhd"
$vm = Set-AzureRmVMOSDisk -VM $vm -Name $osDiskName -VhdUri $osDiskUri -CreateOption fromImage -SourceImageUri $urlOfUploadedImageVhd -Windows
$result = New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm
$result
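
Once New-AzureRmVM returns, a quick sanity check that the machine provisioned and is running never hurts.  A one-line sketch using the variables set above:

# Confirm the new VM exists and inspect its provisioning and power state
Get-AzureRmVM -ResourceGroupName $rgName -Name $vmName -Status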

Disclaimer:  Please note although I work for Microsoft the information provided here does not represent an official Microsoft position and is provided as is.

Auto Shutdown ARM Virtual Machines in Azure | Azure Automation

With the release of the Run As feature in Azure Automation, a service account can now be called in Azure Automation scripts to enable and run Resource Manager functions.

I think one of the most useful applications of this is to auto-shutdown virtual machines by TAG.

In the example below I have set up the following:

  • A TAG called “Environment” with a value of “Lab”, applied to each VM I want to control.
  • A RunAs service account as part of my AzureAutomation resource.
  • A PowerShell Workflow script to scan for the TAGs applied to Windows virtual machines and to shut them all down in parallel.
  • An Azure Automation schedule to run Monday-Friday at 17:00 to call the published workflow.

When configuring a schedule through the browser, it uses the browser’s local time zone.  The schedule is stored in UTC but is converted for you.  You’ll need to consider this if managing multiple resources across the globe.

This workflow and process has been created via the Azure Portal.

The -Parallel feature is only available in PowerShell Workflows.

workflow TestPowershellworkflow
{
    $Conn = Get-AutomationConnection -Name AzureRunAsConnection
    Add-AzureRMAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint

    $FindVMs = Find-AzureRmResource -TagName "Environment" -TagValue "Lab" | where {$_.ResourceType -like "Microsoft.Compute/virtualMachines"}

    Foreach -Parallel ($vm in $FindVMs)
    {
        $ResourceGroupName = $vm.ResourceGroupName

        Write-Output "Stopping $($vm.Name)"
        Stop-AzureRmVm -Name $vm.Name -ResourceGroupName $ResourceGroupName -Force
    }
}
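
If you would rather script the weekday 17:00 schedule from the list above instead of clicking through the portal, something along these lines should work with the AzureRM.Automation cmdlets.  This is a sketch only; the resource group and Automation account names are placeholders, and the portal is what I actually used:

# Create a Monday-Friday 17:00 schedule and link it to the published runbook
$params = @{
    ResourceGroupName     = "AutomationRG"         # placeholder
    AutomationAccountName = "MyAutomationAccount"  # placeholder
}
New-AzureRmAutomationSchedule @params -Name "WeekdayShutdown" `
    -StartTime (Get-Date "17:00").AddDays(1) -WeekInterval 1 `
    -DaysOfWeek Monday, Tuesday, Wednesday, Thursday, Friday

Register-AzureRmAutomationScheduledRunbook @params -RunbookName "TestPowershellworkflow" `
    -ScheduleName "WeekdayShutdown"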

I’d be very interested in your feedback.  Happy automating.

Disclaimer:  Please note although I work for Microsoft the information provided here does not represent an official Microsoft position and is provided as is.

Moving Resources between Resource Groups | Azure

From a best practise point of view, I try to use Resource Groups as containers for all items I consider as part of the same lifecycle.  This allows me to remove, delete and recreate what I need without fear of losing a component and generally allows me to be more efficient.  I like peace of mind.

However, when it comes to best practise sometimes it feels slow and cumbersome.  “This is the Cloud, I don’t want to be held back by anything,” I shout from the rooftops in my superhero pyjamas!

The truth is best practise and/or procedure shouldn’t get in the way of anything at all, but if I’m just spinning up a few resources to test a lab someone has sent me, on a payment gateway or a vNet-to-vNet VPN set up, this pyjama-wearing superhero isn’t waiting around for anyone.  I therefore plough ahead.

Thankfully for rash people like me the Move-AzureRmResource command comes to the rescue.  https://msdn.microsoft.com/en-us/library/mt652516.aspx

For single resources I would first get the details of the resource you would like to move; this is important because you may have named two resources the same (remember what I was saying about being rash).

Get-AzureRmResource -name "resourcename" -ResourceGroupName "resourcegroupname"

Once you have the details you can then specify the ResourceType in the variable call.

Set up the variable as follows with the ResourceType, ResourceGroupName and ResourceName. Then move the resource.

$Resource = Get-AzureRmResource -ResourceType "Microsoft.Network/virtualNetworks" -ResourceGroupName "resourcegroupname" -ResourceName "resourcename"
Move-AzureRmResource -ResourceId $Resource.ResourceId -DestinationResourceGroupName "newresourcegroupname"

Please note: it will move dependencies.  For example, if you move a VM it will also move components such as the Public IP and the Network Security Group.  As written, you will be prompted to confirm that you want to move the resource and the associated resources, if they exist.
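
The -ResourceId parameter also accepts an array, so a resource and its companions can be lined up and moved in a single call.  A rough sketch reusing the same pattern as above (the names and types are placeholders):

# Fetch each resource, then pass the ResourceIds together to Move-AzureRmResource
$vm  = Get-AzureRmResource -ResourceType "Microsoft.Compute/virtualMachines" -ResourceGroupName "resourcegroupname" -ResourceName "vmname"
$nic = Get-AzureRmResource -ResourceType "Microsoft.Network/networkInterfaces" -ResourceGroupName "resourcegroupname" -ResourceName "nicname"
Move-AzureRmResource -ResourceId $vm.ResourceId, $nic.ResourceId -DestinationResourceGroupName "newresourcegroupname"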

Moving resources is simple and easy.  Best practise is important and no Cloud architect should be seen in public in their superhero pyjamas!

Azure Commander over and out……

Disclaimer:  Please note although I work for Microsoft the information provided here does not represent an official Microsoft position and is provided as is.

Public Cloud Availability

If you are in the Australian IT industry it will not have escaped your notice that AWS has had an outage in a single availability zone in Sydney.  This has created a large amount of coverage online, some outrage, some opinion and a lot of WTF moments.

The volume of coverage goes to show how successful public clouds have become.

I cannot comment on the infrastructure of any public cloud, let alone AWS.  Before I worked at Microsoft I certainly used AWS and am familiar with it.  But I cannot and would not like to comment on how it is built and managed; the truth is, like the majority, I do not know and, to be honest, I don’t care.  Amazon is a very large, very successful company, and therefore it’s easy to assume they do a lot very right indeed.

I’m sure they have fully redundant systems, like all public clouds, and probably more redundant systems than you or I could shake a stick at.  However, nothing is 100%, and the availability of any system is the aggregate of the provided uptime of each component.  If that aggregate does not meet your business requirements, you need to architect in redundancy to meet the accepted risk level the business is prepared to tolerate.

How many of these environments were too reliant upon a single data centre?  Too many by the looks of things.

When natural disasters hit they do tend to impact a city hard.  Look at Sydney today, Brisbane a few years ago.

Your cloud deployments need to be different.  You must architect for cross-region redundancy or make the business aware of the risk.  Let the business requirements drive the strategy: cloud, hybrid or multi-vendor.  After all, it’s what we are here for: to drive successful business outcomes.

Disclaimer:  Please note although I work for Microsoft the information provided here does not represent an official Microsoft position and is provided as is.

Moving On

The year moved very quickly for me indeed.  Many blog posts sit in draft; titled but not executed.  Q3 became a blur of events, installs, upgrades and deals.

I always considered myself incredibly lucky to be in a pre-sales role and to be at Citrix.  After all, I have talked before about being at the vendor that you have built your career on, and there I was.  However, I began to become increasingly aware of my customers’ desire to move to the cloud and the exciting projects they were undertaking.  At Citrix I was a small part of these as they looked to move workloads onto a cloud platform, be that PaaS, IaaS or SaaS.

Citrix has parts to play in all of this but I was keen to spend more time focused on these transformational investments.  I was determined to learn and engage more directly.  Therefore, when the opportunity to work at Microsoft presented itself I took it with both hands.

Like everything, luck, in terms of timing, played a part; Microsoft were looking to expand their team just at the point I thought I had enough experience to apply for the position.  If I’m honest, I’ve not been through a more testing, challenging and enjoyable process.  (If you don’t get anything out of being grilled and challenged, then you probably will not find it enjoyable.)

So where to from here?  My plan is to share my Azure journey this year on here and to walk through the tech steps I take on projects.  Please note this will be very much my own experiences and there are a number of excellent blogs, tech sites and channels from Microsoft dedicated to technical tips and tricks, troubleshooting etc. that you can access.  I will look to create a resources page that I use on my journey.

For now, over and out from XenCommander Wade and hello Commander Azure!

It’s Q2 Already WTF!

I’ve learned that sales quarters roll round real fast.  I spent some time of my life studying physics, very much at an elementary level.  Time was a topic I always liked the idea of, but the academics that tried to express relativity always tripped me up with explanations of travelling trains and viewpoints; they should have given me a quota and a deadline, then worked back to the mathematics!  RFQ response deadlines loom far too quickly and customers’ purchasing departments move in a different time, yet the 24 hours in a day appear to tick over at an alarmingly regular rate and the end of quarter comes just as it should.

For me, one month of Q2 has passed; deals are taking shape and pricing is being negotiated.  This can be a great time if you are in front, and it’s easy to get carried away with how the year might pan out.  You may be behind (I know, it’s the sales manager’s fault for setting your silly quota, but I can’t change that) and trying to work out how you can climb what is already looking like a mountain.

In either position, now is the time as an SE to look at Q4.  How are you going to close the year out, how are you going to contribute to the greater good of your quota, and where is the extra value that you bring?

You have to assist the deals on the go, you have to work on the company projects, and the last thing you probably need is additional work, but Q4 is where it is at and your influence is now.

Surprise your rep and book a Q4 planning meeting.  Take time to look at your account list and work out who you can meet and who you are not currently speaking to.  Grab that coffee, take a notebook (how old-fashioned am I – tablet!) and listen.

Remember you have the best job in the world, go out and enjoy it because as an SE time moves very fast indeed!

 

Images supplied from http://www.freeimages.com/license

Losing your Religion

The new year has begun; in truth we are well into it.  Accounts and numbers have been finalised, processes, personnel and strategy have been reviewed, and through all this the job continues.

I’d like to take a little look back at Q4; the role of the SE in Q4 and what follows can be an interesting one.  It’s the time of year I am at the mental edge of the role; I spend more time thinking about decisions and actions than at any other.  With a smirk, I’m sure you will have had a sales lead say “this is the most important week, of the most important month, of the most important quarter.”  Every sale is important to a sales team, and as the year closes out they gather momentum.
Image: http://www.freeimages.com/photographer/maxpate-62416

For some, Q4 will have shuddered to a halt; the final days might have felt like riding a truck with failed brakes, and turning the wheel at this late stage is not going to change the end result.  And this is the key to it all: you are not, as an SE in the final throes, going to change the overall result.  Your work should have been done.  You can and should support and help; start the planning process, take a meeting with a partner or two, make a cup of tea for the sales admin team.  (As much as reps think they work the hardest at the end of the quarter, it’s actually those who have to process and book all the orders they have thrown together that do – all other times it’s us ;-))

There will be things you would have done differently throughout the year, and there will be decisions taken by management moving forward that you don’t agree with.  This, however, is no time to judge your role; this is no time to lose the SE religion.  Before you know it you will be back in the wagon, the engines will have started, brakes checked and the GPS guiding the way.

Take a breath and remember you are part of the greatest community on earth, who doesn’t want to be an SE!