Read the other posts in this series:
Why Create an AX Build Server/Process?
Part 1 - Intro
Part 2 - Build
Part 3 - Flowback
Part 4 - Promotion
Part 5 - Putting it all together
Part 6 - Optimizations
Part 7 - Upgrades

In our implementation, we use TeamCity to manage each of the individual processes. I’ve discussed how the system not only makes it easier to migrate code through a development process, but also how to update the systems earlier in the cycle (like Dev) to reflect Production without any manual intervention. Both of these tasks benefit from having the entire process pre-scripted, allowing for the smallest amount of downtime so users and developers alike can get back to work as quickly as possible.

This post is now going to bring all the individual pieces together, to make a fully unified Lifecycle Management system. Linking all the processes together is where TeamCity really shines. With Build Triggers and Artifact Dependencies, we can automate the entire process so users don’t have to do anything - the system just does it.

First, let’s go over one new process I have not yet discussed: the data update.

Data Update

Some of our environments (most notably Staging) must be updated to match production on a daily basis. The remaining environments do not have to be in a production state every day, but when they are updated they should be as close to production's current state as possible. This requires the production database to be backed up nightly. To handle this, we created a new build configuration. We kept it separate from the other configurations because it should be an independent process.

The data update configuration has only one step: backing up the production database. This is a simple PowerShell script:

BackupDatabase.ps1
$rootSourceLocation = "\\[Backup Network location]\"
$dbBackupFileName = "DBBackup.bak"
$serverName = "[AX SQL Server Name]"

$dbBackupFile = $rootSourceLocation + $dbBackupFileName

$query = "BACKUP DATABASE [DynamicsAx1] TO DISK = N'" + $dbBackupFile + "' WITH INIT, NOUNLOAD, NAME = N'DynamicsAx1 Clone Backup', NOSKIP, STATS = 10, NOFORMAT"
sqlcmd -E -S $serverName -d master -Q $query

That’s all there is to it.

Now that we have all the processes defined, let’s examine how we want each of the pieces to execute. To help me visualize what should be happening when, I created this flowchart:

Each box represents one of the processes we have discussed previously. The round spots represent when each process line should begin. It should be noted that we also allow each individual task to be run manually. If anything is started manually, the next process in line will still fire. For example, if I trigger a Flowback: Dev manually, Flowback: Beta will automatically run when it has completed. Likewise, if I run a Promotion: Staging manually, it will be followed by a Build Process and Promotion: UAT.

The numbers in each box represent the total time to run the process (on average). This helps to determine the trigger times by working backwards from when we want the process to complete.

As you can see, we want there to be a data update, staging promotion, and build process every night, and we want a UAT Promotion to occur after every Build process (so we can begin testing the changes immediately). Dev and Beta are both manually triggered by developers, but when we update Dev we want to make sure Beta is updated with it so they have the same base code.
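The "work backwards from the target completion time" idea is easy to express in code. This is an illustrative Python sketch, not part of our actual tooling; the process names and durations below are hypothetical:

```python
from datetime import datetime, timedelta

def start_times(finish_by, durations):
    """Given the average duration (in minutes) of each chained process,
    listed last-process-first, return the required start time of each,
    working backwards from the target completion time."""
    starts = []
    t = finish_by
    for name, minutes in durations:
        t -= timedelta(minutes=minutes)
        starts.append((name, t))
    return list(reversed(starts))

# Hypothetical nightly chain that must finish by 06:00
chain = [("Promotion: UAT", 40), ("AX Build Process", 60), ("Promotion: Staging", 40)]
for name, start in start_times(datetime(2024, 1, 2, 6, 0), chain):
    print(f"{name} starts at {start:%H:%M}")
```

The earliest start time in the result is what the Schedule Trigger gets; everything downstream is handled by Finish Build Triggers.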

Now that we have an idea of how each of the individual tasks relate to each other, we can begin scheduling the tasks in TeamCity, using the triggers section of each build configuration:

Process               | Trigger Type         | Description
Flowback: Dev         | None                 |
Flowback: Beta        | Finish Build Trigger | Wait for successful build in AX Lifecycle: AX Flowback: Dev
AX Data Update        | Schedule Trigger     | Cron command: 0 0 00 1,3-7 * (Server Time Zone)
Promotion: Staging    | Finish Build Trigger | Wait for successful build in AX Lifecycle: AX Data Update
AX Build Process      | Finish Build Trigger | Wait for successful build in AX Lifecycle: AX Promotion: Staging
Promotion: UAT        | Finish Build Trigger | Wait for successful build in AX Lifecycle: AX Build Process
Promotion: Production | Schedule Trigger     | Weekly on Sunday at 22:30 (Server Time Zone)
Flowback: Build       | Finish Build Trigger | Wait for successful build in AX Lifecycle: AX Promotion: Production
Promotion: Staging    | Finish Build Trigger | Wait for successful build in AX Lifecycle: AX Flowback: Build


The Schedule Triggers start a process at specific times. Because we want to execute the Data Update process 6 out of the 7 days per week, instead of creating one trigger for each day, we use a single cron statement that executes on those days (in this case, Tuesday through Sunday). It should also be noted that the Data Update starts at midnight, while the Production Promotion starts at 11:30pm, so the Data Update schedule must be offset by one day or the two processes will overlap each other.

The Finish Build Triggers wait for the particular event (in this case, a successful execution of the previous process), and then add themselves to the queue. If you have two processes with the same Finish Build Trigger, it’s more or less random which one will start first, but because our process is linear in nature, we don’t have to worry about that.

One of the nice side-effects of setting up triggers this way is that downstream processes only run on successful completion. If for some reason the Promotion: Staging fails, nothing after it runs. Similarly, if a Build Process itself fails, UAT won’t be updated with the failed code. We still need to address the failure, of course, but by stopping the process line prematurely, no unnecessary work is done.

I should also note that the Production Promotion process includes a step that is identical to the data update. This is because after a production promotion, we want the Build server to update with the new code and data. However, we only want it to update after a production promotion. If we attempted to chain the Data Update after production promotion, and the build flowback after that, Build would be updated every night, which is not a good thing when we try to accumulate changes over the course of a week. This way, we can make sure Build is only updated once a week, and the data is still updated nightly.


Now that everything is scheduled, all that is left is the actual developer work. This automation was built to support the development workflow we follow internally:

  1. Request comes in, any missing information gathered by Project Manager
  2. Work assigned to developer by Project Manager
  3. Developer makes changes in Dev
  4. Developer loads changes in Beta, reviews with User for functionality testing
  5. User approves functionality testing
  6. Developer loads changes (with version control information) into Build
  7. Developer triggers Build Process. Build + Automatic push to UAT
  8. Project Manager reviews changes with user in UAT
  9. Project Manager accepts changes
  10. Approved changes pushed to Staging nightly
  11. Approved changes pushed to Production weekly

There are exceptions to some of these steps (for example, Beta and the associated user review is normally reserved for large projects with many working parts; in some cases, the user may not even be involved until UAT), but for the most part this is our workflow.

Some of the nice benefits we’ve enjoyed since implementing this include:

  • Increased SOX compliance: the developer cycle (Dev -> Beta -> Build) is independent of the production cycle (Build -> UAT -> Staging -> Production).
  • The code deployment processes are all well-defined and automated, so there is no risk of “forgetting” a step, like copying version control information.
  • All changes are traceable: given a production build number, you can find the corresponding build process number and all the changes related to that build. There are also built-in auditing features that record who triggered a specific process, and when.
  • If something goes wrong and you need to rollback to a previous version, it’s as easy as triggering a custom Production Promotion process.

I do hope the information in this series helps others to see the value in implementing a Lifecycle Management process for AX 2009, and gives some ideas on how it can be accomplished relatively quickly and painlessly.



In this installment of the Automated Builds and Code Deployment series, I’m going to cover what is probably the most important component of the build process: Promotion.

The Promotion process should be the only way new code leaves one environment and enters another. Our promotion cycle is fairly straightforward: Dev => Beta => Build => UAT => Staging => Production. Projects should hit most, if not all, of these environments and must go through them in that order. We have found that Beta is really the only environment that can be skipped, but should only be skipped for very minor changes (for example, adding a single field from an existing data source on a report).
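As a rough illustration of that rule (a hypothetical Python helper, not part of our tooling), the allowed transitions could be expressed as:

```python
# Promotion cycle: code may only move one environment forward,
# except that Beta may be skipped for very minor changes.
PROMOTION_ORDER = ["Dev", "Beta", "Build", "UAT", "Staging", "Production"]

def valid_promotion(src: str, dst: str) -> bool:
    """Return True if code is allowed to move from src directly to dst."""
    i, j = PROMOTION_ORDER.index(src), PROMOTION_ORDER.index(dst)
    if j == i + 1:
        return True  # normal one-step promotion
    return src == "Dev" and dst == "Build"  # Beta skipped for minor changes
```

Encoding the cycle this way makes the "no environment skipping" policy checkable rather than purely procedural.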

Again, we are using the Template feature of TeamCity 8.0+ to help manage our configurations. Similar to our Flowback processes, we have a template definition of a variable, Working Directory, which needs to be defined in the implementation of each build configuration.

Our non-production promotion process consists of 7 steps:

  1. Shut down AX Server Process
  2. Copy Build files
  3. Remove AOS temp/cache files
  4. Update AX database
  5. Copy Version Control attributes
  6. Start AX Server Process
  7. Synchronize Data Dictionary

The production promotion process is very similar, with 7 steps, but with some slight changes:

  1. Shut down AX Server Process
  2. Copy Build files
  3. Remove AOS temp/cache files
  4. Copy Version Control attributes
  5. Start AX Server Process
  6. Synchronize Data Dictionary
  7. Backup Production Database

As you can see, the biggest difference is that the Production Promotion does not update the database (for obvious reasons), but instead backs it up. I’ll go into more detail in my next post, which will bring everything together and outline how each of the pieces interact as a total Lifecycle Management system.

Process Configuration

Each individual process has an Artifact Dependency on the AX Build Process. The process itself should define the criteria for which build it should take. For example:

  • UAT Promotion should take the last successful build
  • Staging and Production should take the last pinned build

During execution, TeamCity will automatically look up the last build that meets the criteria and download the artifacts that were saved from that build.
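Those criteria map onto TeamCity's REST build locator syntax. A hypothetical Python sketch of the URL a "last pinned build" lookup would use (the server name and build type ID are placeholders, and the exact locator dimensions should be checked against your TeamCity version):

```python
def last_pinned_build_url(server: str, build_type_id: str) -> str:
    """Build the REST URL for the most recent pinned, successful build
    of a configuration. Locator dimensions per the TeamCity REST API."""
    locator = f"buildType:(id:{build_type_id}),status:SUCCESS,pinned:true,count:1"
    return f"{server}/guestAuth/app/rest/builds/?locator={locator}"

print(last_pinned_build_url("http://teamcity.example.com", "AxBuildProcess"))
```

For the "last successful build" case (UAT), you would simply drop the `pinned:true` dimension.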
Additionally, we have the artifact paths set to

BuildFiles.zip!*.*=>AX Build Files

This means the build agent running the configuration will take all the files it finds in the BuildFiles zip file (which is created during the build process) and extract them to a folder named AX Build Files. This path is referenced in later scripts so we can move the files where they need to go.

Stop AX Server Process

Because we will be manipulating the server binaries, our first step is to shut down the AX server (or servers). Originally we used a batch script for this step, but because we could not specify a timeout, we would sometimes run into issues where a service had not finished shutting down or starting up before the rest of the process ran. So instead we use a simple PowerShell script:

StopAxServers.ps1
stop-service -inputobject $(get-service -ComputerName "[AOS Server 2]" -DisplayName "Dynamics AX Object Server 5.0$*") -WarningAction SilentlyContinue
stop-service -inputobject $(get-service -ComputerName "[AOS Server 1]" -DisplayName "Dynamics AX Object Server 5.0$*") -WarningAction SilentlyContinue
stop-service -inputobject $(get-service -ComputerName "[AX Load Balancer]" -DisplayName "Dynamics AX Object Server 5.0$*") -WarningAction SilentlyContinue

As you can see, we stop each process sequentially and in reverse order; in reality you can stop the processes in any order. Because we are using PowerShell’s stop-service, the script naturally waits until each service has finished stopping before moving to the next line. If something causes AOS Server 2 to not stop at all, AX will still be available because Server 1 and the Load Balancer are still up. The -WarningAction flags prevent the warning messages (“WARNING: Waiting for service ‘[Service name]’ to finish stopping…”) from showing in the TeamCity logs.

Copy Build Files

As mentioned before, the files from the build are automatically extracted to a folder that we can reference. We cannot just extract them to the AX Server folder because the extraction process occurs before the first defined step, meaning the files will be in use. Instead, we will just copy them there now that the server is offline:

CopyBuildFiles.bat
@echo off
REM Resources
set fileSource="..\..\AX Build Files\*.*"
set fileDestin=\\[server name]\DynamicsAx1\
REM /Resources

xcopy %fileSource% %fileDestin% /Y /z
REM "IF ERRORLEVEL n" matches any exit code >= n, so compare exact codes instead
IF %ERRORLEVEL% EQU 1 ECHO ##teamcity[message text='No files to copy' status='ERROR']
IF %ERRORLEVEL% EQU 2 ECHO ##teamcity[message text='File copy terminated prematurely' status='ERROR']
IF %ERRORLEVEL% EQU 4 ECHO ##teamcity[message text='Initialization Error' status='ERROR']
IF %ERRORLEVEL% EQU 5 ECHO ##teamcity[message text='Disk write error' status='ERROR']

The AX Build Files folder will be in the root build directory, which is two levels up from where the script resides. Additionally, we have the server files shared across the network to the build server, which allows us to update all the files remotely. There is also some generic error handling at the bottom, since xcopy’s nonzero exit codes won’t surface as readable errors in the TeamCity log on their own.
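The `##teamcity[...]` lines are TeamCity service messages. If you build these strings yourself, note that TeamCity requires certain characters in attribute values to be escaped with a vertical bar. A small Python helper (illustrative, not part of our build scripts) shows the documented escaping rules:

```python
def teamcity_message(text: str, status: str = "NORMAL") -> str:
    """Build a TeamCity 'message' service message, escaping per
    TeamCity's rules: | -> ||, ' -> |', [ -> |[, ] -> |],
    and newlines -> |n / |r."""
    escaped = (text.replace("|", "||")
                   .replace("'", "|'")
                   .replace("\n", "|n")
                   .replace("\r", "|r")
                   .replace("[", "|[")
                   .replace("]", "|]"))
    return f"##teamcity[message text='{escaped}' status='{status}']"

print(teamcity_message("Disk write error", "ERROR"))
```

Unescaped quotes or brackets in a message will silently break the service-message parsing, so it is worth centralizing this.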

Remove AOS temp/cache files

This step is another simple script, which removes the now-outdated temp and cache files the AX server uses to help make things run faster. If they aren’t removed, the server may continue to use the old code, which could cause issues for users. These files are rebuilt with the new codebase once the first AX server starts up.

RemoveTempFiles.bat
@echo off
REM Resources
set fileLoc=\\[server name]\DynamicsAx1
REM /Resources

del "%fileLoc%\*.alc"
del "%fileLoc%\*.ali"
del "%fileLoc%\*.aoi"

As you can see, I’m only targeting some of the temp/cache files:
ALC = Application Label Cache files
ALI = Application Label Index files
AOI = Application Object Index files

You can remove more files if you like, but stick to extensions ending with the letter C or I. You can find more details on what each file extension means at http://www.artofcreation.be/2009/10/27/application-file-extensions/.
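If you would rather drive the deletion from a script that whitelists extensions explicitly, a Python sketch might look like this (the folder path is hypothetical, and the extension set is just the three from the batch file above):

```python
from pathlib import Path

# Cache/index extensions that are safe to delete; the first AOS to
# start rebuilds them from the new codebase.
CACHE_SUFFIXES = {".alc", ".ali", ".aoi"}

def deletable_cache_files(folder: str) -> list:
    """Return the names of AX cache/index files in folder that match
    the whitelist, leaving application data files (e.g. .aod) alone."""
    return sorted(p.name for p in Path(folder).glob("*.a??")
                  if p.suffix.lower() in CACHE_SUFFIXES)
```

A whitelist like this is safer than a wildcard delete, since accidentally matching an .aod file would destroy the application layer itself.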

Update AX Database

This is only for non-production promotions, and is very similar to the Database Update step of the Flowback processes. We restore the current backup of production into the specific server’s database, and run a SQL update script that points necessary system values to the correct values for the environment.

Copy Version Control Attributes

This is probably the trickiest of the scripts. Because we use the built-in AX MorphX Version Control system, and this information is only entered with the code back in Build, we need a way of bringing it forward to each system. We use a PowerShell script to manage this process.

Additionally, we have a modification in our system that shows the internal build number the system is running, and when it was originally created/approved:

This information is stored in the database on the SysEnvironment table, and since it directly relates to the version information, we update it during this process. All the information comes directly from TeamCity via the REST API. Additionally, each database server is linked to the previous database server in line (i.e., UAT has a link to Build, Staging has a link to UAT, and Production has a link to Staging).

In this case, the script takes a number, which represents the build ID (not to be confused with the build number). This is passed into the script from the TeamCity configuration.

CopyVersionControlInfo.ps1
param([Int32]$buildId)

Write-Host "##teamcity[message text='Loading build information for $buildId' status='NORMAL']"

#Load XML assemblies
[Reflection.Assembly]::LoadWithPartialName("System.Xml.Linq") | Out-Null
[Reflection.Assembly]::LoadWithPartialName("System.Linq") | Out-Null
Add-PSSnapin SqlServerCmdletSnapin100 -ErrorAction SilentlyContinue | Out-Null
Add-PSSnapin SqlServerProviderSnapin100 -ErrorAction SilentlyContinue | Out-Null
#/assemblies

#Local vars
$sourceSqlServer = "[Source DB Server]"
$sourceSqlName = "[Source DB Name]"
$destinationSqlServer = "[Destination DB Server]"
$destinationSqlName = "[Destination DB Name]"
#/Local vars

$buildInfo = [System.Xml.Linq.XDocument]::Load("http://[TeamCity Build Server Root URL]/guestAuth/app/rest/builds/id:$buildId");
$buildNum = $buildInfo.Root.Attribute("number").Value;
$buildDate = [DateTime]::ParseExact(($buildInfo.Root.Descendants("finishDate") | %{$_.Value}), "yyyyMMddTHHmmsszzz", [System.Globalization.CultureInfo]::InvariantCulture).ToUniversalTime().ToString("s");
$apprvDate = [DateTime]::ParseExact(($buildInfo.Root.Descendants("timestamp") | %{$_.Value}), "yyyyMMddTHHmmsszzz", [System.Globalization.CultureInfo]::InvariantCulture).ToUniversalTime().ToString("s");


#Update Build information in the environment
$query = "UPDATE SysEnvironment SET BUILDNO = $buildNum, BUILDDATE = '$buildDate', APPROVEDDATE = '$apprvDate'"
Invoke-Sqlcmd -ServerInstance $destinationSqlServer -Database "DynamicsAx1" -Query $query

#Pass along Version Control Items table
$query = "INSERT INTO [DynamicsAx1].[dbo].[SYSVERSIONCONTROLMORPHXITE2541]
SELECT DISTINCT src.*
FROM [$sourceSqlName].[DynamicsAx1].[dbo].[SYSVERSIONCONTROLMORPHXITE2541] src
LEFT JOIN [$destinationSqlName].[DynamicsAx1].[dbo].[SYSVERSIONCONTROLMORPHXITE2541] dest
ON src.RECID = dest.RECID
LEFT OUTER JOIN [$sourceSqlName].[DynamicsAx1].[dbo].[SYSVERSIONCONTROLMORPHXREV2543] rev
on rev.ITEMPATH = src.ITEMPATH
WHERE dest.RECID IS NULL and rev.CREATEDDATETIME < '$buildDate'"

Invoke-Sqlcmd -ServerInstance $destinationSqlServer -Database "DynamicsAx1" -Query $query

#Pass along Version Control Lock table
$query = "INSERT INTO [DynamicsAx1].[dbo].[SYSVERSIONCONTROLMORPHXLOC2542]
SELECT src.*
FROM [$sourceSqlName].[DynamicsAx1].[dbo].[SYSVERSIONCONTROLMORPHXLOC2542] src
LEFT JOIN [$destinationSqlName].[DynamicsAx1].[dbo].[SYSVERSIONCONTROLMORPHXLOC2542] dest
ON src.RECID = dest.RECID
WHERE dest.RECID IS NULL and src.CREATEDDATETIME < '$buildDate'"

Invoke-Sqlcmd -ServerInstance $destinationSqlServer -Database "DynamicsAx1" -Query $query

#Pass along Version Control Revision table
$query = "INSERT INTO [DynamicsAx1].[dbo].[SYSVERSIONCONTROLMORPHXREV2543]
SELECT src.*
FROM [$sourceSqlName].[DynamicsAx1].[dbo].[SYSVERSIONCONTROLMORPHXREV2543] src
LEFT JOIN [$destinationSqlName].[DynamicsAx1].[dbo].[SYSVERSIONCONTROLMORPHXREV2543] dest
ON src.RECID = dest.RECID
WHERE dest.RECID IS NULL and src.CREATEDDATETIME < '$buildDate'"

Invoke-Sqlcmd -ServerInstance $destinationSqlServer -Database "DynamicsAx1" -Query $query

#Update RecID sequences for above tables
foreach ($i in (2541, 2542, 2543))
{
$query = "UPDATE [DynamicsAx1].[dbo].[SYSTEMSEQUENCES]
SET NEXTVAL = (SELECT NEXTVAL FROM [$sourceSqlName].[DynamicsAx1].[dbo].[SYSTEMSEQUENCES] src
WHERE src.TABID = $i)
WHERE TABID = $i"

Invoke-Sqlcmd -ServerInstance $destinationSqlServer -Database "DynamicsAx1" -Query $query
}

Each of the queries is run on the destination SQL server, so the information is ‘pulled’ forward. Additionally, it will only take version notes that were created before the current build. This allows multiple builds to be in the system, without the version information being passed upstream.
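For reference, TeamCity's compact yyyyMMddTHHmmsszzz timestamps (what the ParseExact calls above consume) convert to the sortable format the same way in Python; the sample value here is hypothetical:

```python
from datetime import datetime, timezone

# TeamCity REST timestamps look like 20240115T093000-0500
ts = "20240115T093000-0500"
dt = datetime.strptime(ts, "%Y%m%dT%H%M%S%z")  # %z consumes the -0500 offset

# Normalize to UTC and emit the sortable ("s") format SQL Server accepts
utc_iso = dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S")
print(utc_iso)
```

Normalizing to UTC before writing BUILDDATE/APPROVEDDATE avoids ambiguity when the build server and database server sit in different time zones.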

The biggest risk with this setup is if you need to roll back Build before a production promotion occurs. If you do not load the same elements in the same order, you run the risk of the RecID on the version control tables getting out of sync.

Start AX Server Process

Now that all the database maintenance has been completed, we start up the AX processes again:

StartAxServers.ps1
start-service -inputobject $(get-service -ComputerName "[AX Load Balancer]" -DisplayName "Dynamics AX Object Server 5.0$*") -WarningAction SilentlyContinue
start-service -inputobject $(get-service -ComputerName "[AOS Server 1]" -DisplayName "Dynamics AX Object Server 5.0$*") -WarningAction SilentlyContinue
start-service -inputobject $(get-service -ComputerName "[AOS Server 2]" -DisplayName "Dynamics AX Object Server 5.0$*") -WarningAction SilentlyContinue

Again, this is a PowerShell script, so we can take advantage of the built-in wait while each process starts up. You may also notice that we start the processes in the reverse order we shut them down. While this is not necessary for everyone, it is something that should be kept in mind.
Our license allows 2 concurrent AOS servers and, technically, an unlimited number of load balancers (since they do not consume a server license). However, when the load balancer starts up, the process is not yet aware that it is a dedicated load balancer, and it consumes a license. During a normal startup that license is released within a few seconds. However, since we deleted the cache/index files earlier and the load balancer is the first server process to start, it rebuilds all those files before releasing the license.
The end result is that if we do not wait for the load balancer to finish starting up, the second production server (the third in the list) will not start at all.

Also, this step has a slight configuration change compared to the previous steps. All the previous steps are configured to execute only if the build status is successful, meaning that if any step fails, the subsequent steps will not run - a helpful feature, especially when a server process fails to stop. This step, however, is configured as “Always, even if a build stop command was issued.” This allows the servers to always come back online, even if the promotion failed or was manually stopped.

Synchronize Data Dictionary

This step ensures that the database schema matches what the code says it should. Since the database was restored earlier (for non-production environments), this applies any database changes that have been introduced since the last production promotion.

Backup Production Database (Production only)

This step is only in the Production Promotion configuration. I will explain more about why this step is here in my next post, but for the time being the step is relatively simple: backup the production database to a network location. The backup is then used in all the other processes to restore to a production-like status.

And that’s how we run our promotions. Manually promoting is just a button click away, and the total system downtime is minimized. For non-production promotions, the average time to run (from shutdown to completion of the database sync) is about 40 minutes. For the production promotion, average system downtime is about 30 minutes. The total time to run (including the database backup) is about 2 hours. This gives us a lot of flexibility in scheduling our updates with minimal downtime.


James McCollum