
Auto Shutdown ASM Virtual Machines in Azure | Azure Automation

I previously put together a quick script to auto-shutdown tagged ARM VMs.

Many people are still running ASM (Classic) VMs, and why wouldn't you? They are still supported (as of September 2016).

The process is not much different; in fact, now that Azure Automation creates a Run As account at setup, it's much easier to configure.

In the example below I have tacked on changes to the Azure Automation Team’s sample script, one of four created for you when you enable the feature.

<#
.DESCRIPTION
    An example runbook which gets all the Classic VMs in a subscription using
    the Classic Run As Account (certificate) and then shuts down running VMs
.NOTES
    AUTHOR: Azure Automation Team + Jonathan Wade
    LASTEDIT: 28-08-2016
#>

$ConnectionAssetName = "AzureClassicRunAsConnection"
$ServiceName = "wadeclassiv01"

# Authenticate to Azure with certificate
Write-Verbose "Get connection asset: $ConnectionAssetName" -Verbose
$Conn = Get-AutomationConnection -Name $ConnectionAssetName
if ($Conn -eq $null)
{
    throw "Could not retrieve connection asset: $ConnectionAssetName. Assure that this asset exists in the Automation account."
}

$CertificateAssetName = $Conn.CertificateAssetName
Write-Verbose "Getting the certificate: $CertificateAssetName" -Verbose
$AzureCert = Get-AutomationCertificate -Name $CertificateAssetName
if ($AzureCert -eq $null)
{
    throw "Could not retrieve certificate asset: $CertificateAssetName. Assure that this asset exists in the Automation account."
}

Write-Verbose "Authenticating to Azure with certificate." -Verbose
Set-AzureSubscription -SubscriptionName $Conn.SubscriptionName -SubscriptionId $Conn.SubscriptionID -Certificate $AzureCert 
Select-AzureSubscription -SubscriptionId $Conn.SubscriptionID

# Get the VMs in the cloud service
$VMs = Get-AzureVM -ServiceName $ServiceName

# Stop each of the started VMs
foreach ($VM in $VMs)
{
    if ($VM.PowerState -eq "Stopped")
    {
        # The VM is already stopped, so send notice
        Write-Output ($VM.InstanceName + " is already stopped")
    }
    else
    {
        # The VM needs to be stopped
        $StopRtn = Stop-AzureVM -Name $VM.Name -ServiceName $VM.ServiceName -Force -ErrorAction Continue

        if ($StopRtn.OperationStatus -ne 'Succeeded')
        {
            # The VM failed to stop, so send notice
            Write-Output ($VM.InstanceName + " failed to stop")
        }
        else
        {
            # The VM stopped, so send notice
            Write-Output ($VM.InstanceName + " has been stopped")
        }
    }
}


Disclaimer:  Please note although I work for Microsoft the information provided here does not represent an official Microsoft position and is provided as is.

PowerShell to list ASM VMs, Size and CloudService | Azure

I’ve been asked several times now to help customers running IaaS VMs in Classic (ASM) mode analyse their resources.

The following will list your classic VMs by CloudService, VMName and VMSize to a CSV file.

You’ll need to edit the location of the file.

Please note this is not meant to be a sophisticated script, just enough to get you to the answer.

Happy hunting!


Add-AzureAccount
$services = Get-AzureVM | Group-Object -Property ServiceName
foreach ($service in $services)
{
    $out = @()
    foreach ($VM in $service.Group)
    {
        $props = @{
            CloudService = $service.Name
            VMName       = $VM.Name
            VMSizes      = $VM.InstanceSize
        }
        $out += New-Object PSObject -Property $props
    }
    $out | Format-Table -AutoSize -Wrap CloudService, VMName, VMSizes
    $out | Export-Csv c:\scripts\test.csv -Append
}
 


Creating a VM from an Azure Image | Azure

Working with Azure in the enterprise means you will quickly want to create your own custom images.  In this introductory article I will show you an example of how to create a VM from an existing generalized image.

Please note:

  • This is utilising the ARM model and does not apply to Classic.
  • This assumes you have created a generalized image in Azure and know where it is!
  • This process does not consider on-premises VMs.
  • This process uses Windows images.

The following documents and articles were used to create the script below.  Many thanks to the efforts and hard work of the authors.

Create a Virtual Machine from a User Image by Philo

Upload a Windows VM image to Azure for Resource Manager deployments by Cynthia Nottingham

Cynthia shows how to create the image and find the URL of the uploaded image.  She also gives detailed examples of the PowerShell scripts required to create the new VM.

Philo uses variables for existing networks, which I found very useful; I just comment out the pieces I do not need, e.g. when the vnet already exists.

Happy VM creating!


$cred = Get-Credential
$rgName = "ResourceGroupName"
$location = "Azure Location"
$pipName = "Public IP address Name"
$pip = New-AzureRmPublicIpAddress -Name $pipName -ResourceGroupName $rgName -Location $location -AllocationMethod Dynamic
$subnet1Name = "Subnet Name"
$vnetSubnetAddressPrefix = "Subnet address e.g. 10.1.0.0/24"
$vnetAddressPrefix = "vnet address e.g. 10.1.0.0/16"
$nicname = "Name of Nic"
$vnetName = "Name of vnet"
$subnetconfig = New-AzureRmVirtualNetworkSubnetConfig -Name $subnet1Name -AddressPrefix $vnetSubnetAddressPrefix
#$vnet = New-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName -Location $location -AddressPrefix $vnetAddressPrefix -Subnet $subnetconfig
$vnet = Get-AzureRmVirtualNetwork -Name $vnetName -ResourceGroupName $rgName # fetch the existing vnet when the line above is commented out
$nic = New-AzureRmNetworkInterface -Name $nicname -ResourceGroupName $rgName -Location $location -SubnetId $vnet.Subnets[0].Id -PublicIpAddressId $pip.Id
$vmName = "Name of VM"
$vmConfig = New-AzureRmVMConfig -VMName $vmName -VMSize "Standard_A4"
$computerName = "Name of Computer"
$vm = Set-AzureRmVMOperatingSystem -VM $vmConfig -Windows -ComputerName $computerName -Credential $cred -ProvisionVMAgent -EnableAutoUpdate
$vm = Add-AzureRmVMNetworkInterface -VM $vm -Id $nic.Id
$osDiskName = "Name of Disk"
$storageAcc = Get-AzureRmStorageAccount -ResourceGroupName $rgName -Name "Name of Storage Account" # account that will hold the new OS disk
$osDiskUri = '{0}vhds/{1}{2}.vhd' -f $storageAcc.PrimaryEndpoints.Blob.ToString(), $vmName.ToLower(), $osDiskName
$urlOfUploadedImageVhd = "URL to generalized image https://somename.blob.core.windows.net/system/Microsoft.Compute/Images/templates/name-osDisk.00aaaa-1bbb-2dd3-4efg-hijlkmn0123.vhd"
$vm = Set-AzureRmVMOSDisk -VM $vm -Name $osDiskName -VhdUri $osDiskUri -CreateOption fromImage -SourceImageUri $urlOfUploadedImageVhd -Windows
$result = New-AzureRmVM -ResourceGroupName $rgName -Location $location -VM $vm
$result


Auto Shutdown ARM Virtual Machines in Azure | Azure Automation

With the release of the Run As feature in Azure Automation, a service account can now be used in Azure Automation scripts to run Resource Manager functions.

I think one of the most useful applications of this is auto-shutting down virtual machines by tag.

In the example below I have set up the following:

  • A tag named “Environment” with a value of “Lab”, applied to each VM I want to control.
  • A Run As service account as part of my Azure Automation resource.
  • A PowerShell workflow script that scans for the tag on Windows virtual machines and shuts them all down in parallel.
  • An Azure Automation schedule that runs Monday-Friday at 17:00 to call the published workflow.
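As a hedged sketch of that first bullet, the tag can be applied up front with the AzureRM cmdlets along these lines (the resource group and VM names here are placeholder assumptions, not values from this post):

```powershell
# Apply the Environment=Lab tag to a VM so the workflow can find it.
# Resource group and VM names are placeholders - substitute your own.
$vm = Get-AzureRmResource -ResourceGroupName "LabResourceGroup" `
    -ResourceType "Microsoft.Compute/virtualMachines" -ResourceName "LabVM01"

# Merge the new tag with any tags already on the resource
$tags = $vm.Tags
if ($tags -eq $null) { $tags = @{} }
$tags["Environment"] = "Lab"

Set-AzureRmResource -ResourceId $vm.ResourceId -Tag $tags -Force
```

Merging into the existing hashtable matters because setting -Tag replaces the resource's whole tag collection rather than appending to it.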

When configuring a schedule through the browser it uses the browser’s local time zone.  The schedule is stored in UTC but converted for you.  You’ll need to consider this if managing multiple resources across the globe.

This workflow and process has been created via the Azure Portal.

The -Parallel feature is only available in PowerShell workflows.

workflow TestPowershellworkflow
{
    $Conn = Get-AutomationConnection -Name AzureRunAsConnection
    Add-AzureRMAccount -ServicePrincipal -Tenant $Conn.TenantID -ApplicationId $Conn.ApplicationID -CertificateThumbprint $Conn.CertificateThumbprint

    $FindVMs = Find-AzureRmResource -TagName "Environment" -TagValue "Lab" | Where-Object {$_.ResourceType -like "Microsoft.Compute/virtualMachines"}

    Foreach -Parallel ($vm in $FindVMs)
    {
        Write-Output "Stopping $($vm.Name)"
        Stop-AzureRmVm -Name $vm.Name -ResourceGroupName $vm.ResourceGroupName -Force
    }
}

I’d be very interested in your feedback.  Happy automating.


Moving Resources between Resource Groups | Azure

From a best-practice point of view, I try to use Resource Groups as containers for all items I consider part of the same lifecycle.  This allows me to remove, delete and recreate what I need without fear of losing a component, and generally allows me to be more efficient.  I like peace of mind.

However, when it comes to best practice, sometimes it feels slow and cumbersome.  “This is the Cloud, I don’t want to be held back by anything!” I shout from the rooftops in my superhero pyjamas.

The truth is best practice and/or procedure shouldn’t get in the way of anything at all, but if I’m just spinning up a few resources to test a lab someone has sent me on a payment gateway or a vNet-to-vNet VPN set-up, this pyjama-wearing superhero isn’t waiting around for anyone.  I therefore plough ahead.

Thankfully for rash people like me the Move-AzureRmResource command comes to the rescue.  https://msdn.microsoft.com/en-us/library/mt652516.aspx

For single resources, first get the details of the resource you would like to move.  This is important because you may have named two resources the same (remember what I was saying about being rash).

Get-AzureRmResource -name "resourcename" -ResourceGroupName "resourcegroupname"

Once you have the details you can then specify the ResourceType in the variable call.

Set up the variable as follows with the ResourceType, ResourceGroupName and ResourceName. Then move the resource.

$Resource = Get-AzureRmResource -ResourceType "Microsoft.Network/virtualNetworks" -ResourceGroupName "resourcegroupname" -ResourceName "resourcename"
Move-AzureRmResource -ResourceId $Resource.ResourceId -DestinationResourceGroupName "newresourcegroupname"

Please note: it will move dependencies; for example, if you move a VM it will also move components such as the Public IP and Network Security Group.  As is, you will be prompted to confirm that you want to move the resource and any associated resources, if they exist.
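Along the same lines, here is a hedged sketch (group names are placeholders) for moving everything in one resource group at once; -ResourceId accepts an array of resource IDs, so a whole group's worth can go in one call:

```powershell
# Move every resource in one group to another - names are placeholders.
$resources = Get-AzureRmResource -ResourceGroupName "oldresourcegroupname"
Move-AzureRmResource -ResourceId $resources.ResourceId `
    -DestinationResourceGroupName "newresourcegroupname"
```

Batching like this is also faster than moving resources one at a time, since each move operation locks both resource groups for its duration.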

Moving resources is simple and easy.  Best practice is important, and no Cloud architect should be seen in public in their superhero pyjamas!

Azure Commander over and out……


Public Cloud Availability

If you are in the Australian IT industry it will not have escaped your notice that AWS has had an outage in a single availability zone in Sydney.  This has created a large amount of coverage online, some outrage, some opinion and a lot of WTF moments.

The volume of coverage goes to show how successful public clouds have become.

I cannot comment on the infrastructure of any public cloud let alone AWS. Before I worked at Microsoft I certainly used AWS and am familiar with it.  But I cannot and would not like to comment on how it is built and managed, the truth is like the majority I do not know and to be honest I don’t care. Amazon is a very large, very successful company and therefore it’s easy to assume they do a lot very right indeed.

I’m sure they have fully redundant systems, like all public clouds, and probably more redundant systems than you or I could shake a stick at.  However, nothing is 100%, and the availability of any system is the aggregate of the provided uptime of each component.  If that aggregate does not meet your business requirements, you need to architect in redundancy to meet the risk level the business is prepared to tolerate.
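To put a rough number on that aggregate: availabilities multiply in series and only fail jointly in parallel, so a quick back-of-the-envelope PowerShell sketch (the 99.9% figure is an assumption for illustration, not any provider's actual SLA):

```powershell
# Assumed per-component availability - not any provider's actual SLA
$a = 0.999

# Two components in series: both must be up, so availabilities multiply
$series = $a * $a                     # 0.998001 -> ~99.8%

# The same workload deployed redundantly across two regions:
# the service is down only if both fail at once
$parallel = 1 - (1 - $a) * (1 - $a)   # 0.999999 -> ~99.9999%

"Series: {0:P4}  Redundant: {1:P4}" -f $series, $parallel
```

Chaining components erodes availability, while redundancy multiplies the failure probabilities instead, which is exactly why cross-region designs matter.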

How many of these environments were too reliant upon a single data centre?  Too many by the looks of things.

When natural disasters hit they do tend to impact a city hard.  Look at Sydney today, Brisbane a few years ago.

Your cloud deployments need to be different.  You must architect for cross-region redundancy or make the business aware of the risk.  Let the business requirements drive the strategy: cloud, hybrid or multi-vendor.  After all, that’s what we are here for: to drive successful business outcomes.


Moving On

The year moved very quickly for me indeed.  Many blog posts sit in draft; titled but not executed.  Q3 became a blur of events, installs, upgrades and deals.

I always considered myself incredibly lucky to be in a pre-sales role and to be at Citrix. After all I have talked about being at the vendor that you have built your career on before and there I was. However, I began to become increasingly aware of my customers’ desire to move to the cloud and the exciting projects they were undertaking.   At Citrix I was a small part of these as they looked to move workloads onto a cloud platform, be that PaaS, IaaS or SaaS.

Citrix has parts to play in all of this but I was keen to spend more time focused on these transformational investments.  I was determined to learn and engage more directly.  Therefore, when the opportunity to work at Microsoft presented itself I took it with both hands.

Like everything luck, in terms of timing, played a part; Microsoft were looking to expand their team just at the point I thought I had enough experience to apply for the position.  If I’m honest I’ve not been through a more testing, challenging and enjoyable process.  (If you don’t get anything out of being grilled and challenged, then you probably will not find it enjoyable)

So where to from here?  My plan is to share my Azure journey this year on here and to walk through the tech steps I take on projects.  Please note this will be very much my own experiences and there are a number of excellent blogs, tech sites and channels from Microsoft dedicated to technical tips and tricks, troubleshooting etc. that you can access.  I will look to create a resources page that I use on my journey.

For now, over and out from XenCommander Wade and hello Commander Azure!

Flipping the Workplace

Recently I’ve been working in the education space; it’s something I recommend for everyone at least once.  The challenges are numerous, educators are innovators, and it is interesting dealing with the technical demands of a user base heavily influenced by consumer trends.

It was the university sector that first introduced me to reverse learning or flipped learning.

“The flipped classroom describes a reversal of traditional teaching where students gain first exposure to new material outside of class, usually via reading or lecture videos, and then class time is used to do the harder work of assimilating that knowledge through strategies such as problem-solving, discussion or debates. (Vanderbilt University, Center for Teaching).”

Further reading – Flipping the classroom – The Economist

With the ability to access information from almost anywhere people are seeking to plough through the noise before they enter the office. In exactly the same way a flipped classroom has students viewing the information before they come together, a flipped workplace lets staff digest the knowledge they need before they meet to work on an outcome.

Martin Duursma, VP Citrix Labs and CTO Office Chair, delivered a presentation at Citrix Synergy titled Taking One Step Beyond; with other Citrix CTOs he covered some interesting long-term technology trends.

Many mirror what is happening in education: organisations trying to mobilise their workforce to slow the need for additional office space, the medical sector responding to instant information availability and device trends, and engineering and construction firms taking information out to the field for instant feedback and collaboration.  Each of these examples is changing the demand on the traditional workspace and enabling people to flip the way they work.

IT professionals and departments need to be enablers and leaders of this approach to working and not the blockers or luddites.

Maybe it’s time to go back to class?

Are Micro Apps Changing Everything?

I come from a desktop/application background.  I’ve spent most of my IT career working with one flavour or another of Citrix’s XenApp platform (I’ve been through all the name changes) and am currently employed by Citrix as a Systems Engineer in Australia.  To that end I’ve spent a lot of time working with apps; in the early days that meant hacking them as best I could to squeeze them onto a multi-user platform.  This kept me extremely busy, but in reality it was always a marginal activity, in that most applications were being installed onto desktops.

I saw a major change with my involvement in the launch of an “e” project in 1999; “e” stood for electronic, and the company I was working with started investing heavily in creating web front ends for all applications.  The end result was that vast numbers of apps were rolled out onto MetaFrame because users wanted the full application and the business wanted to centralise.  (I say business, but it was in fact the IT division’s idea.)  I saw the business demanding fully functional applications, and the impact of that project has stuck with me ever since.

I often sit with customers today, and one statement I hear myself saying again and again is “do you have a desktop problem or an application problem?”  The point I am trying to highlight is that maybe we can solve the issue they have raised by pulling out the applications that require attention, rather than reworking their desktop strategy.  After all, it’s the apps that are important.

Recently I found myself again pondering this scenario, and so I tweeted “It’s all about the apps, always has been, always will be”, which generated a number of responses.  The first that came back was from @bramwolfs: “I think it’s all about the data not specifically the apps..”  This immediately had me thinking there was little way out of this; data exists and is manipulated by apps, my focus has always been on the apps and that is where I make my money, so that is where I placed my bet.  @KBaggerman highlighted a blog post titled “VDI OK What’s Next” by Stephane Thirion, a Citrix CTP (@archynet), talking about desktops versus applications and applications versus data.  He makes some interesting points about the relationships between data and applications and the importance of data.  I can only agree, however I would add that as some apps’ sole purpose is to collect and create data, it is hard to define and almost irrelevant to consider which came first or which is more important; both are a requirement.

More interestingly, he talks about user habits and the requirement for a desktop operating system.  He also talks about the rise of mobile apps or micro apps, i.e. apps created for a single purpose that do not require interaction or workflows with other applications and therefore do not require a desktop operating system.

This to me is an interesting area of development, and I believe we are seeing two forces at play: the rise of SaaS and its adoption, and the influence of the iPad and tablet.

Firstly, SaaS is entering every workplace.  I was recently hosting a CIO round-table discussion where every CIO was focused on SaaS, and in fact the most interesting comment was “every app I deliver I now have to compete against a SaaS app; that is the way I have to think.”  And you know what, I think he was right; if you enforce a monolithic set of apps onto a workforce and it is not meeting the needs of a business unit, then you can bet within days that business unit will be hunting out an alternative and swiping their credit card when they find something they like.

Secondly, the iPad factor: all apps on the iPad have single functions.  I book my travel, check my email, look at websites and knock over blocks with very upset birds.  Each app performs well, and every day I use them I am breaking the habit of having to work within an operating system.  Therefore, every day that operating system becomes less relevant to me.

Can we drop the operating system?  No, there are too many applications built for that platform.  Is the desktop operating system becoming less relevant?  Yes, however this has to be taken in context; just take a look at how many Windows 7 licences have been sold since release.  But I do think that the micro app, aided by the choice and availability on offer from SaaS vendors, is accelerating change.

UPDATE: If you want to read some interesting points, head back to “VDI OK What’s Next” by Stephane Thirion and join in or have a read.

Move out the way IT….

If you are like me, you have spent your life in IT.  I started in desktop support, spent some time on a help desk, and then moved back into level 2 desktop support (I can’t remember the difference between level 1 and level 2 desktop support; I still spent a lot of time crawling about and plugging in cables).  From there I took my first sip from the infrastructure cup and spent time in a server team, racking and stacking.  I then went back into the help desk world and managed a team before moving into a data centre and spending my life completing server builds, working on automation projects, data centre management and builds, networking and design.

It’s a long list, and I’m sure yours is similar, but why bother with it at all?  Well, throughout this time I saw ASPs come and go, I built a lot of web servers for application web “e-projects”, I saw outsourcers come and go, services were proposed on site and off site, and I was convinced we had the data centre of the future.  Yes, during this time things were changing: processes were becoming slicker, application developers were becoming more sophisticated (I should clarify: not in the sense that they started wearing shoes to the office, just their programming skills), and capacity, in what we could do and what the program, hardware or network would let us do, was growing fast.  However, through all that I was sure we had it licked; Moore’s Law was in operation and we were taking full advantage of it.

In fact I still look back at those times and think I did good work; the teams I was part of helped the businesses we worked for grow, be more productive and transform.  And that for me is the crux of the point I am trying to make: businesses do transform, it happens all the time.  Not all get caught up in the Innovator’s Dilemma, a GFC crash or a failed investment.  A lot grow fast, spin off in random directions and pioneer new industries.  However, to do this today, in an age where information moves fast, in fact where information is instant, requires an agility a traditional environment cannot provide.

For a very simple example, let’s take two IT executives who are required to roll out a new CRM system; the assumption is that their business has identified this need to help them reach new markets.  Meetings are held, due diligence is completed and a project team is set up to identify the business requirements.  Once these are identified, team one brings in a global software provider, partners with a large technology provider and starts the process of developing the application and planning the roll-out.  Team two picks a SaaS vendor and starts.  In this instance it is assumed that the SaaS vendor has a product that meets the business requirements identified during phase one of the project.  Is it that simple?  Well, yes it is.  Both teams have completed the correct amount of due diligence to identify the business requirements and select an application vendor; however, team two does not have to build and roll out any infrastructure or manage the application, it just happens.  This means team two delivers on the business outcome much faster than team one, enabling a competitive edge.

Okay, now I can hear the outcries, because you are an IT guy like me with an IT history like mine.  What about security?  What about data protection?  What about lock-in?  Well, you know what, the business outcome has been met; the business is productive, competitive and happy.

I say job done and I say move out the way IT.

UPDATE 4/4/2012

Brett Winterford of iTnews published an interesting article titled Death of the SysAdmin, with the following video clip of Fortescue Metals CIO Vito Forte, delivered during his keynote presentation at iTnews’ Executive Summit in Melbourne.  I think it summarises this article well.