New features in dbops 0.4.3

A new version of dbops (0.4.3) was released, which introduced a few brand-spanking-new configuration and QoL features:

  • Reset-DBODefaultSetting – resets all custom configurations made by a user to their original values defined by the module.
  • Export-DBOConfig – you can now export configurations from packages and configuration objects into a json file and use it later for deployment.
  • Send-DBOMailMessage – sends an informative email after the deployment is done. Supports custom email templates and configurations.
  • New-DBOConfig – creates a new configuration object with current defaults. Previously, this functionality was covered by Get-DBOConfig, which was slightly confusing.

Some examples of how to use the new features:
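Here is a quick sketch of what that could look like; the exact parameter names are assumptions based on the command descriptions above, so check Get-Help for each command before running it:

# Create a fresh configuration object with current defaults and export it to a json file
$config = New-DBOConfig
$config | Export-DBOConfig -Path .\dbops.config.json

# Reset all customized defaults back to the values shipped with the module
Reset-DBODefaultSetting -All

# Deploy a package and send a notification email afterwards (SMTP details are hypothetical)
Install-DBOPackage -Path .\MyPackage.zip -SqlInstance localhost -Database MyDB |
    Send-DBOMailMessage -To dba@mycompany.com -SmtpServer smtp.mycompany.com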

Hopefully, these changes will help you to fine-tune your deployments better.

Happy Halloween!


Introducing dbops Powershell module – DevOps for databases

DevOps for databases

It’s been a long while since DevOps became a hot topic, yet every time we talk about DevOps for databases, every solution tends to force users into some kind of framework, which usually comes with a set of rules binding them to a certain way of writing scripts, deploying code, and even following a specific naming convention. All of this exists only to control the flow of the deployment internally, but it makes the deployment more of a black box to the end users: us.

Ever since I tried to deploy a set of scripts using the infamous Invoke-SqlCmd (that was a terrible idea, by the way), I felt there was a gap in the community-driven Powershell world: a SQL code deployment module. A module that would be simple to use, enable users to implement Continuous Deployment and Continuous Integration scenarios, and be absolutely transparent in the way it works.

Eventually, in my searches, I stumbled upon the DbUp project: a pure .Net cross-platform deployment solution, which, however, required a fair amount of coding before you could start using it. More often than not, only .Net developers would risk venturing into the complications of its internal classes, building a deployment solution as a part of their application. The idea behind the project, however, was perfect: a fully customizable deployment library that can be wrapped into any kind of code and used the way you like. After some testing, I realized that it was exactly what I needed: an open-source engine on which I could build a 100% customizable Powershell SQL code deployment module.

Continuous Integration and Continuous Deployment

DbUp performs deployments using so-called migrations: sequential transformations of the database schema that eventually lead to a fully deployed state. The scripts are supposed to be in a single folder and be properly named to maintain the deployment order. To maintain deployment integrity, the DbUp engine uses a database versioning approach, where each executed script is recorded in a schema versions table, which tracks the progress of the deployment and prevents migrations from being executed twice.

But what if people are re-using the same scripts when deploying the database? Or the development process already has a defined naming convention and consists of multiple similarly named scripts and/or folders? I wanted the module to be able to deploy literally any kind of solution, as long as the scripts are simple plain-text SQL files. And not only deploy them, but also enable CI/CD operations for that code!

With that in mind, I came up with the idea of Builds, mimicking the application deployment pipeline: the code goes through a build phase first, where it is packed into a deployment package, ensuring the same deployment experience across the enterprise. To enable databases to follow that same path, the module uses Builds that represent a set of scripts bringing the database to a specific state after a migration.

[Image: DbOps package – dbops builds deployed onto databases with different states]

Each build consists of one or more scripts deployed in a predefined order; once all the scripts from the first build are deployed, the next-in-line build is picked up for deployment. Each script is recorded in a schema versions table inside the deployment database, ensuring that the same script will never be executed twice.

Such an approach opens the door to CI/CD scenarios, where any subsequent version of the package can be deployed on top of any database state, even though it's just good old plain-text SQL code behind the scenes. It also lets users choose any file names and any deployment order they need.

Since all previous SQL files are stored inside the package, it is quite easy to use that catalog to distinguish new files in the source code folder and add them to the package, which enables users to leverage CI/CD features. The only catch is that the package now has to be stored somewhere so that it can be easily retrieved during the next build phase.

 

[Image: dbops flow]

DBOps module

The module takes care of all aspects of the deployment and grants full control over the deployment process. The code is distributed under the MIT license and was recently made available on Github: https://github.com/sqlcollaborative/dbops

To install the module, run the following command to get it directly from the PSGallery:

Install-Module dbops

As of the date of this post, the module supports two major RDBMS: SQL Server and Oracle, but since the list of platforms supported by DbUp is huge, it's only a matter of time (and demand!) until all the other platforms are covered.

Basic help is available on the front page, however, the documentation is still a work in progress. The same goes for the module itself: even though its features are now more prominent, its development is far from over, and there is a whole set of features planned for future releases, such as rollbacks, tSQLt integration, sqlcmd/sqlplus script parsing, etc.

Command list

Deployment

  • Install-DBOSqlScript – script deployments from a set of files/folders
  • Install-DBOPackage – deployments of the previously built packages

Package management

  • New-DBOPackage – creates a new package from an existing set of scripts
  • Get-DBOPackage – returns the package object with a multitude of fine-tuning scenarios
  • Add-DBOBuild – adds a new build to the existing package
  • Remove-DBOBuild – removes a build from the existing package

Configuration management

  • Get-DBOConfig – loads configuration from a package or a configuration file
  • Update-DBOConfig – updates configuration of the package
  • Get-DBODefaultSetting – get a list of default deployment settings
  • Set-DBODefaultSetting – modify default deployment settings

CI/CD pipelining capabilities

  • Invoke-DBOPackageCI – integrate recent changes in the code into the package using incremental versioning

Artifact management

  • Copy-DBOPackageArtifact – copy a specified version of the package from the repository
  • Get-DBOPackageArtifact – retrieve a path to the specified version of the package from the repository
  • Publish-DBOPackageArtifact – publish package into the repository

Cross-platform support

  • Install-DBOSupportLibrary – download and install 3rd-party libraries for other database types, if needed
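
To give a rough idea of how these commands fit together, here is a sketch of a typical package lifecycle. The parameter names are assumptions based on the descriptions above, so verify them with Get-Help:

# Create a package from a folder with SQL scripts (becomes build 1.0)
New-DBOPackage -Name .\Deploy\MyPackage.zip -ScriptPath .\SqlScripts\v1.0 -Build 1.0

# Later on, add the next portion of scripts as a new build
Add-DBOBuild -Path .\Deploy\MyPackage.zip -ScriptPath .\SqlScripts\v1.1 -Build 1.1

# Deploy the package - only the builds that have not been deployed yet will be executed
Install-DBOPackage -Path .\Deploy\MyPackage.zip -SqlInstance MyServer -Database MyDatabase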

Thank you for the support

Open-source projects are fueled by many people working towards the same goal. I really appreciate the support of the #dbatools team who made this project possible. Special thanks to:

  • @mobileck – for testing and on-point feedback
  • @cl – for dbatools groove, mentoring and bright ideas
  • @sqldbawithbeard – for Test-Driven Development mentality
  • @FredWeinmann – for showing me the ways of PSFramework thus improving multiple aspects of this module

 

Using dacpac packages to clone SQL Server databases

Many SQL Server DBAs know that there is a very convenient way of delivering a full copy of a database structure to a different SQL Server: it's called dacpac, or a Data-tier Application package. You can extract the database at any time using the SQL Server Management Studio context menu: Tasks -> Extract Data-tier Application, which guides you through the extraction options and outputs a compressed package file containing all the necessary schema definitions, as well as table data. This package can then be deployed to a completely different server using the Deploy Data-tier Application context menu item.

What people do not widely know, however, is how much fine-tuning a simple deployment might require, and how difficult it sometimes is to force the deployment operation to behave in a certain way. The good thing is that there is now a convenient way of automating this process using the dacpac-related functions from the dbatools Powershell module.

This article explains how to:

  • Extract dacpac from a database
  • Choose whether you want to include data or not
  • Deploy the dacpac package on any version of SQL Server higher than 2008 SP2
  • Exclude object types from the deployment process
  • Generate deployment scripts
  • Create a Jenkins job that will take care of this whole process for you

Extracting dacpac package

Let’s use this simple script to extract dacpac from an existing database:
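The original script was published as a gist; here is a comparable sketch. The property names passed to the extraction follow the standard SqlPackage extract parameters, and the -ExtendedProperties parameter name is an assumption – double-check it against Get-Help Export-DbaDacpac:

# Extraction options: skip permissions and include table data in the package
$exportProperties = '/p:IgnorePermissions=True /p:ExtractAllTableData=True'

# Extract the dacpac from the source database
$extractResult = Export-DbaDacpac -SqlInstance MyServer -Database MyDatabase -ExtendedProperties $exportProperties

# The .Path property points to the extracted file
$extractResult.Path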

Export properties here are defined to control the extract process:

  • Ignore database permissions, as there is rarely a need to add permissions into the mix
  • Define an option to include data into the package

The Export-DbaDacpac function takes care of the extraction process and returns all the details about the extraction, including the .Path property that points us to the extracted file:

[Image: output from Export-DbaDacpac]

You can check all the other available properties (which are basically command-line parameters) in the corresponding article: https://msdn.microsoft.com/en-us/library/hh550080.aspx

Deploying dacpac package

There is one important note about dacpac deployment: it requires a so-called Publish Profile. A publish profile is an XML file that contains the deployment parameters you want to enable during the deployment. You can create such a profile using one of the following options:

  • Run New-DbaPublishProfile, specifying the connection parameters. The resulting file will enable you to run the deployment, but it is highly recommended that you modify it according to your needs
  • From a Visual Studio SSDT project, select Project -> Publish. This opens a dialog that allows you to load, save, and modify the publish profile.
  • Manually, using the documentation from the Publish parameters section of the same article

The parameters that we’re going to use during deployment are as follows:

  • AllowIncompatiblePlatform – would enable me to deploy to an earlier version of SQL Server.
  • CreateNewDatabase – will re-create the database every time
  • ExcludeObjectTypes – semicolon-separated list of objects that will be excluded from the deployment.
  • ExcludeLogins and ExcludeUsers – somehow, these two ended up being independent parameters, not included in the previously mentioned ExcludeObjectTypes. Excludes logins and/or users from the deployment.
  • IgnorePermissions – ignores object permissions during the deployment.
  • DropObjectsNotInSource – boolean value, which is True by default, that controls whether existing objects in the database are going to be dropped if they are not present in the package. This is a very important parameter if you are planning to deploy dacpac to a non-empty database.
  • IgnoreRoleMembership – ignores role membership

This is an example of the Publish profile file that we’re going to use in this exercise:
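The original file was published as a gist; here is a minimal sketch of what such a profile could look like, assembled from the parameters above via a Powershell here-string. The object type list in ExcludeObjectTypes is just a placeholder – put your own semicolon-separated list there:

@'
<?xml version="1.0" encoding="utf-8"?>
<Project ToolsVersion="14.0" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <PropertyGroup>
    <AllowIncompatiblePlatform>True</AllowIncompatiblePlatform>
    <CreateNewDatabase>True</CreateNewDatabase>
    <ExcludeObjectTypes>Filegroups;Files</ExcludeObjectTypes>
    <ExcludeLogins>True</ExcludeLogins>
    <ExcludeUsers>True</ExcludeUsers>
    <IgnorePermissions>True</IgnorePermissions>
    <DropObjectsNotInSource>True</DropObjectsNotInSource>
    <IgnoreRoleMembership>True</IgnoreRoleMembership>
  </PropertyGroup>
</Project>
'@ | Out-File -FilePath .\publish.xml -Encoding utf8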

Once we have a publish profile ready, we can start the deployment using the Publish-DbaDacpac function, specifying the following (as shown in the sketch below):

  • SqlInstance – target server
  • Database – target database
  • Path – path to the dacpac package
  • PublishXml – path to the publish profile
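Assuming the extraction result from the previous step is still in $extractResult and the profile is saved as publish.xml, the call could look like this (the server name is hypothetical):

Publish-DbaDacpac -SqlInstance TargetServer -Database MyDatabase -Path $extractResult.Path -PublishXml .\publish.xml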

Other parameters of Publish-DbaDacpac that you might find useful:

  • GenerateDeploymentScript – will generate a deployment script prior to deployment
  • ScriptOnly – will not perform the deployment, generating a deployment script instead

Full script

The script below extracts the dacpac from an existing database and deploys it to a different server using the publish.xml file we created above. Feel free to modify it to your own liking!
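The original full script was published as a gist; here is a comparable sketch that strings the pieces together. The instance names are hypothetical, and the -ExtendedProperties parameter name carries the same assumption as above:

$sourceServer = 'SourceServer'
$targetServer = 'TargetServer'
$database     = 'MyDatabase'

# Extract the dacpac from the source database, including table data and skipping permissions
$package = Export-DbaDacpac -SqlInstance $sourceServer -Database $database -ExtendedProperties '/p:IgnorePermissions=True /p:ExtractAllTableData=True'

# Deploy it to the target server using the publish.xml profile created earlier
Publish-DbaDacpac -SqlInstance $targetServer -Database $database -Path $package.Path -PublishXml .\publish.xml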

Creating a Jenkins job

In order to create an interface and a scheduler at the same time, I would definitely recommend making this scenario available on your Jenkins instance, the same way we did before in Refresh database & keep permissions – SQL Server Jenkins labs (Part 2). The parameters are going to be almost the same, so feel free to use the examples from the Jenkins labs to create a job that uses the dacpac deployment script instead.

[Image: sample Jenkins job]

Conclusion

Dacpac packages are a convenient way of copying a full database when other methods are unavailable for whatever reason. There is some learning curve associated with them, but they can be a great help when used in an automated manner.

It is also a perfect workaround for copying a database when the source server runs a higher SQL Server version than the target server: this will work even between 2017 and 2008, as long as all the objects are compatible with the target database version.

It also works pretty well as an alternative to snapshot replication, being much less restrictive in its approach.

UPDATE: Updated the CreateNewDatabase parameter with a new description, apparently it recreates the database every time instead of just creating a new database if needed.

ZipHelper Powershell module

I've recently published the ziphelper Powershell module on PSGallery, due to my permanent frustration with the existing Expand-Archive/Compress-Archive commands in Powershell 5.1. They work fine, but when there's a need to work with a single item out of the whole archive, there is no option other than extracting or compressing everything at once.

This new module works on a per-item basis, introducing 4 new commands:

  • Add-ArchiveItem – adds one or more items to the archive with an option to specify an InnerFolder within the archive where the files will be stored.
  • Expand-ArchiveItem – extracts one or more archived files from the archive.
  • Remove-ArchiveItem – removes one or more items from the archive.
  • Get-ArchiveItem – returns a list of details about the items in the archive. When a specific item is requested, it also returns its contents as binary.
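
Here is a quick sketch of how the commands could be used; treat the parameter names as assumptions and check Get-Help for the exact syntax:

# Add a single file to an existing archive, placing it into a subfolder inside the zip
Add-ArchiveItem -Path .\MyPackage.zip -Item .\scripts\001.sql -InnerFolder content

# List the items stored in the archive
Get-ArchiveItem -Path .\MyPackage.zip

# Extract a single file from the archive
Expand-ArchiveItem -Path .\MyPackage.zip -DestinationPath .\extracted -Item content\001.sql

# Remove a single item without repacking the whole archive by hand
Remove-ArchiveItem -Path .\MyPackage.zip -Item content\001.sql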

The module uses native zip streams and comes without any 3rd-party libraries. Feel free to download it from PSGallery:

Install-Module ziphelper

Refresh database & keep permissions – SQL Server Jenkins labs (Part 2)

Welcome back to the SQL Server Jenkins labs! In the first part of the labs we created a Jenkins job that restored a database based on historical data. This time, let’s try something very similar, but add some flavor to it.

We're going to set up a job that copies a single database in its current state from one server to another. It is very similar to the previous lab exercise, except this one takes care of many more things at once. Let's add into the mix most of the activities that accompany a database restore, such as:

  • Changing database owner after restore is completed;
  • Cleaning up existing permissions on the restored database;
  • Restoring previous permissions, in case the database is being replaced with a fresh copy.

On top of that, let’s make some of them optional, increasing the flexibility of the solution.

Writing the code

Once again, there is no reason not to use dbatools commands for this purpose, just as most of us wouldn't want to reinvent the wheel. The scenario is going to be quite simple:

  1. Define parameters – connections, database names, paths, etc.
  2. Backup the source database to a network share with COPY ONLY
  3. Export permissions from the target database if needed
  4. Restore the target database
  5. Change database owner
  6. Remove all users from the database if needed
  7. Re-apply old permissions if needed

The restore process, when it comes to replacing an existing database, is a very delicate operation, so let's make sure that the code stops whenever an error occurs. This is achieved by using the -EnableException switch of the dbatools functions, which throws an error instead of a simple warning (the default behavior). Let's also define $ErrorActionPreference = 'Stop' to ensure that the script does not execute any further once an error has occurred. Lastly, it would be a good idea to save the existing permissions to a disk drive in case a manual intervention is required.

Most of the output will be hidden in the Verbose stream, which allows us to control the amount of output we want to see on our end. Just add a -Verbose switch, if you need to see more!

Here is the script we are going to use this time. It has slightly more action inside, but it still consists of the same building blocks, most of which are regular dbatools commands. Let’s name it Copy-Database.ps1:

[Cmdletbinding()]
Param (
    [Parameter(Mandatory)]
    [string]$SourceServer,
    [Parameter(Mandatory)]
    [string]$SourceDatabase,
    [string]$TargetServer,
    [Parameter(Mandatory)]
    [string]$TargetDatabase,
    [Parameter(Mandatory)]
    [string]$BackupFilePath,
    [string]$DBOwner,
    [switch]$KeepPermissions,
    [switch]$DropUsers
)

#Stop on any error by default
$previousErrorActionPreference = $ErrorActionPreference
$ErrorActionPreference = 'Stop'

#Temporary permissions will be stored here
$permissionsFile = ".\Permissions-$TargetDatabase.sql"

#If target server is not specified, it's assumed that it's going to be the same server
if (!$TargetServer) { $TargetServer = $SourceServer }

if (Test-Path $BackupFilePath) {
    #Removing old temporary backup if it still exists for some reason
    Write-Verbose "Removing old backup file $BackupFilePath"
    Remove-Item $BackupFilePath
}
#Run copy-only backup
Write-Verbose "Initiating database backup`: $SourceServer.$SourceDatabase to $BackupFilePath"
$backup = Backup-DbaDatabase -SqlInstance $SourceServer -Database $SourceDatabase -BackupFileName $BackupFilePath -CopyOnly -CompressBackup -Checksum -EnableException

if (!$backup.BackupComplete) {
    throw "Backup to $BackupFilePath was not completed successfully on $SourceServer.$SourceDatabase"
}

#Record and store permissions
if ($KeepPermissions -or $DropUsers) {
    $permissions = Export-DbaUser -SqlInstance $TargetServer -Database $TargetDatabase
    Write-Verbose "Exported permissions from $TargetServer.$TargetDatabase`: $permissions"
    Write-Verbose "Storing permissions of $TargetServer.$TargetDatabase in a file $permissionsFile"
    $permissions | Out-File -FilePath $permissionsFile
}

#Restore the database on top of existing DB
Write-Verbose "Initiating database restore`: $BackupFilePath to $TargetServer.$TargetDatabase"
$backup | Restore-DbaDatabase -SqlInstance $TargetServer -DatabaseName $TargetDatabase -WithReplace -ReplaceDbNameInFile -EnableException

#Update database owner, stop throwing errors at this point, as further actions are not critical
$ErrorActionPreference = $previousErrorActionPreference

if ($DBOwner) {
    Write-Verbose "Updating database owner to $DBOwner"
    Set-DbaDatabaseOwner -SqlInstance $TargetServer -Database $TargetDatabase -TargetLogin $DBOwner
}

#Drop users if requested
if ($DropUsers) {
    $users = Get-DbaDatabaseUser -SqlInstance $TargetServer -Database $TargetDatabase -ExcludeSystemUser
    foreach ($user in $users) {
        Write-Verbose "Dropping user $($user.Name) from $TargetServer.$TargetDatabase"
        try {
            $user.Drop()
        }
        catch {
            # No need to throw, maybe a user owns a schema of its own
            Write-Warning -Message $_
        }
    }
}

#Restore permissions
if ($KeepPermissions) {
    Write-Verbose "Restoring permissions of $TargetServer.$TargetDatabase from a file $permissionsFile"
    Invoke-DbaSqlQuery -SqlInstance $TargetServer -Database $TargetDatabase -Query $permissions
}

#Remove backup file
if (Test-Path $BackupFilePath) {
    Write-Verbose "Removing backup file $BackupFilePath"
    Remove-Item $BackupFilePath
}

#Remove permissions file
if (Test-Path $permissionsFile) {
    Write-Verbose "Removing permissions file $permissionsFile"
    Remove-Item $permissionsFile
}
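
Before wiring the script into Jenkins, you can run it by hand to make sure it behaves as expected; the server names and the share path below are hypothetical:

.\Copy-Database.ps1 -SourceServer SQLPROD01 -SourceDatabase MyDB `
    -TargetServer SQLDEV01 -TargetDatabase MyDB_Copy `
    -BackupFilePath '\\Some\Path\MyDB.bak' `
    -DBOwner sa -KeepPermissions -DropUsers -Verbose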

Creating a new Jenkins job

Configuration check

Let's reiterate this and ensure that our Jenkins instance has all the necessary tools to implement the job we have in mind.

Plugins list

The following Jenkins plugins are going to be used in this job:

Agent setup

Ensure that the Jenkins executor(s) you intend to use run under an account that has been granted sysadmin privileges on both the source and target SQL Servers. Make sure to confine the job to those particular executor(s) using labels.

Also, let's ensure that the executor node(s) have the latest version of dbatools installed:

Install-Module dbatools

Finally, let's copy our Copy-Database.ps1 script to a local folder using the same exact path across all the Jenkins nodes, and ensure that the job executor has read/write access to that folder. If you want to use a remote folder (file share), make sure that your Execution Policy is set to RemoteSigned on each node: Set-ExecutionPolicy -ExecutionPolicy RemoteSigned.

For advanced Jenkins users I would recommend using a Source Control Management repository to store your code and retrieve it using the SCM section in the job definition.

Creating the job

Finally, it's time to create the job! Start by clicking the New Item link on the left-hand side of the Jenkins window:


Input a new project name and make sure to choose a Freestyle project, which is the most generic Jenkins job type:


In the job configuration window, set the project to be parameterized (click the corresponding checkbox). Add 4 String parameters (server and database names for both source and destination) and 2 Boolean parameters (which will represent our optional switches):


The name of each added parameter will become an environment variable, so make sure not to use special characters in the parameter names.


Provide a descriptive definition for the parameters:

  • String parameter 1
    • Name: SourceServer
    • Default value:
    • Description: Full server name of the source instance
    • Trim the string: Yes
  • String parameter 2
    • Name: SourceDatabase
    • Default value:
    • Description: Name of the source database to copy
    • Trim the string: Yes
  • String parameter 3
    • Name: TargetServer
    • Default value:
    • Description: Full server name of the target instance
    • Trim the string: Yes
  • String parameter 4
    • Name: TargetDatabase
    • Default value:
    • Description: Name of the target database. Will be replaced, if exists.
    • Trim the string: Yes
  • Boolean parameter 1
    • Name: KeepPermissions
    • Default value: Unchecked
    • Description: Check if you want to re-apply all the previously existed permissions on top of the restored database
  • Boolean parameter 2
    • Name: DropUsers
    • Default value: Unchecked
    • Description: Check if you want to drop all the users from the target database upon restore.

If you want this project to run only on specific agent nodes, click Restrict where this project can be run and provide a space-separated list of labels in the field:


Now that we have our job parameterized and running only on specific nodes, the only thing left is to pass those parameters to the Powershell script. In the Build section, click Add build step and select Windows PowerShell:


In the Command section, let's put a call to our script file, which should already be accessible to the agent nodes, and pass all the parameters from the job:

& D:\Scripts\Copy-Database.ps1 -SourceServer $env:SourceServer `
  -TargetServer $env:TargetServer `
  -SourceDatabase $env:SourceDatabase `
  -TargetDatabase $env:TargetDatabase `
  -KeepPermissions:([bool]::Parse($env:KeepPermissions)) `
  -DropUsers:([bool]::Parse($env:DropUsers)) `
  -BackupFilePath "\\Some\Path\$($env:SourceDatabase).bak" `
  -DBOwner sa `
  -Verbose

Several things to note about this script:

  • $env:SourceServer is a reference to an environment variable that was defined by the corresponding job parameter.
  • [bool]::Parse($env:KeepPermissions) is used to convert the job's parameter to [bool]. All job parameters are essentially strings, and boolean parameters will have one of two values: 'true' or 'false'. We're simply converting them to [bool] using the static Parse method of the [bool] class.
  • -DropUsers:([bool]::Parse($env:DropUsers)) is used to pass a boolean value to a switch parameter, which requires a colon notation.

Save the job by pressing the Save button on the bottom of the screen.

Running the job

Let’s start the job now! Click Build with Parameters from the job overview screen:


Input all the parameters and start the job:


In the Build History section of the page, you will see the progress of the job. By clicking on it, you will be able to see the details, such as the Console Output:


Conclusion

There are multiple ways of introducing automation to your environment, and Powershell is definitely one of them. Yet, sometimes it's hard for people to embrace automation without a simple UI that helps them take the first step.

Jenkins is one such tool: it enables anyone to use automated solutions with little to no knowledge of the code that is executed behind the curtain. It can be just a frontend for your Powershell scripts, or it can become the ultimate tool in your hands that controls multiple aspects of your day-to-day tasks and integrates with your whole enterprise.

Cheers! And stay tuned – there is more to come!

Restore database job – SQL Server Jenkins labs (Part 1)

Hello there, and welcome to SQL Server Jenkins labs!

Some time ago I talked about using Jenkins as a robust scheduling tool, and now it is about time to follow up on that and demonstrate some of the solutions that can be achieved with a combination of Powershell, dbatools and Jenkins.

The first example to go down that pipe is a simple backup restore job. The job will get the backup history from the source server, build a proper restore sequence and perform the restore on a server of your choosing. The restore step will also:

  • Change file paths of the database files to a default data/log directory
  • Rename database files to match the new database name
  • Optionally, perform a point-in-time recovery

Preparations

First, let’s ensure that our Jenkins instance has all necessary tools to implement this job.

Plugins list

The following Jenkins plugins are going to be used in this job:

You can set up plugins by clicking Manage Jenkins in the left-hand side menu and choosing Manage Plugins from the list. On the Plugin Manager page, switch to the Available tab and use the Filter textbox to search for the plugins you want to install.


After you have selected the plugins, click Install without restart. Most of the plugins do not require a restart; however, some of them will not work properly until the whole Jenkins instance is restarted.

Agent setup

Ensure that the Jenkins executor(s) you intend to use run under an account that has been granted sysadmin privileges on both the source and target SQL Servers. Depending on your agent type, this can be configured in Manage Jenkins -> Manage Nodes (Windows service agent) or in the Services console on your server (Java agent).

If you have more than one agent, make sure that all of them are properly configured; otherwise, set up agent Labels that allow you to confine the job to particular executor(s).

Also, let's ensure that the executor node(s) have the latest version of dbatools installed:

Install-Module dbatools

Creating a new Jenkins job

Start by clicking the New Item link on the left-hand side of the Jenkins window:


Input a new project name and make sure to choose a Freestyle project, which is a most generic Jenkins job type:


In the job configuration window, set the project to be parameterized (click the corresponding checkbox). Add 5 String parameters by clicking the Add Parameter button; these parameters will represent the server and database names for both source and destination databases, as well as a restore time parameter for point-in-time recovery:


The name of each added parameter will become an environment variable, so make sure not to use special characters in the parameter names.


Provide a descriptive definition for the parameters:

  • String parameter 1
    • Name: SourceServer
    • Default value:
    • Description: Full server name of the source instance
    • Trim the string: Yes
  • String parameter 2
    • Name: SourceDatabase
    • Default value:
    • Description: Name of the source database to copy
    • Trim the string: Yes
  • String parameter 3
    • Name: TargetServer
    • Default value:
    • Description: Full server name of the target instance
    • Trim the string: Yes
  • String parameter 4
    • Name: TargetDatabase
    • Default value:
    • Description: Name of the target database. Will be replaced, if exists.
    • Trim the string: Yes
  • String parameter 5
    • Name: ToDate
    • Default value:
    • Description: Specify if you want the database to be restored to a certain time, or leave blank to retrieve the latest backup
    • Trim the string: Yes

If you want this project to run only on specific agent nodes, click Restrict where this project can be run and provide a space-separated list of labels in the field:


Now that we have our job parameterized and running only on specific nodes, let's add a Powershell build step that will perform the actual restore process. In the Build section of the job creation page, click Add build step and select Windows PowerShell:


In the Command textbox of the new Windows Powershell item insert the following code:

$p = @{
    SqlInstance = $env:TargetServer
    Database = $env:TargetDatabase
}
if ($env:ToDate) {
    $p += @{ RestoreTime = $env:ToDate}
}

Get-DbaBackupHistory -SqlInstance $env:SourceServer -Database $env:SourceDatabase | Restore-DbaDatabase @p -WithReplace -ReplaceDbNameInFile -TrustDbBackupHistory

 

This code calls two functions from the dbatools module:

  • Get-DbaBackupHistory – retrieve backup history from the source server. Parameters:
    • SqlInstance – source SQL Server name
    • Database – source database name, which would be used to search for backup records
  • Restore-DbaDatabase – out of all the backup history records, restore the most relevant files, starting with full and followed by diff and log backups. Parameters:
    • SqlInstance – target SQL Server name
    • Database – desired database name on the target server
    • RestoreTime – will restore the database to the specified date/time. This parameter is added dynamically, based on whether the ToDate job parameter was specified.
    • WithReplace – replace the target database if it exists
    • ReplaceDbNameInFile – replace the part of the filename that contains the source database name with a new name
    • TrustDbBackupHistory – skip reading backup headers from the files, relying instead on the information from the source server

The parameters of the job are persisted through the execution as environment variables, which is why in order to use them we’ll have to use $env:VariableName notation.

Save the job by pressing the Save button on the bottom of the screen.

Running the job

Let’s start the job now! Click Build with Parameters from the job overview screen:


Input all the parameters and start the job:


In the Build History section of the page, you will see the progress of the job. By clicking on it you would be able to see the details, such as Console Output:


Conclusion

In this example we used very few moving parts, which nevertheless enabled us to create a simple and effective tool that, from now on, is going to save us hours when it comes to restoring databases.

Stay tuned for the future updates!

Special Agent Jenkins

I have seen (and written myself) thousands of lines of Transact-SQL code that interacts with something outside of the SQL Server instance box: be it an executable file, a remote query to a different SQL Server, or something as simple as running a Powershell statement. I still remember how excruciatingly painful it is to wrap simple OS commands into a string variable, feed it into xp_cmdshell, create tables to retrieve results, parse them using T-SQL… The list can go on and on, but the main idea is: such code is hard to read, even harder to maintain and simply a nightmare to scale.

Nine times out of ten these scripts were triggered by a SQL Server Agent job; mostly because, when it comes to scheduling, in 95% of cases SQL Server administrators rely on SQL Server Agent – it is our 'go-to' solution. However, even though SQL Server Agent has many positive traits, it also has certain limitations in some scenarios. The most notable examples include (but are not limited to):

  • Email interactions: basic notifications after job completion are usually not detailed enough, and using sp_send_dbmail in advanced use cases to overcome this deficiency forces us to write complicated logic inside our code to accommodate various scenarios.
  • Interactions with remote SQL Servers: mostly relies on Linked Servers, which come with an overhead of additional maintenance. Requires advanced logic (and maintenance) if Availability Groups are involved – to decide whether the AG database is active on the current server.
  • Other remote interactions: Powershell remoting through a CmdExec task running a Powershell script…? Writing results to a temporary table to pass them on to the next step? Using proxy accounts to interact with network shares? There are many workarounds, but all of them require a wrapper of some kind, which, in addition to the complex syntax, is in most cases not overly secure.
  • Running code that comes from a different programming language: requires a code interpreter to be installed on the SQL box, which I rarely consider the best approach, knowing how easy it is to forget about 'custom' configuration. Powershell was an exception to that rule, except that until SQL 2016 it only came as 'SQL Server Powershell', which had certain quirks and limitations. Debugging such steps was/is a nightmare of its own due to limited exception handling capabilities.
  • Capturing output (and storing it somewhere on the disk) has different flavors, but usually one of the following occurs:
    • There are thousands of log files in your log folder
    • All the logs are stored in msdb, which comes with the additional headache of retrieving them when needed, cleaning them up and including them in email notifications
    • Log file is configured to append output to the existing file and is now several megabytes long
    • You only have access to the latest execution log
  • Execution timeouts – the ability to stop a job after a certain duration can only be achieved by having yet another job to control that.

Now, I am not trying to say that SQL Agent should never be used. It is convenient and utterly reliable at some tasks, like running stored procs or native T-SQL on that same server; it can easily be managed by native SQL Server tools (SSMS, T-SQL); it has advanced options, such as multi-server management and so on. What I’m trying to say is: there are other options.

Jenkins – work smarter not harder

[Image: Mr. Jenkins himself with an automation towel]

Jenkins is a well-known DevOps automation solution, widely used across the globe for all kinds of automation tasks. Once you familiarize yourself with the basic Jenkins concepts, it becomes a powerful tool in your hands that allows you to build almost any chain of automation events, linking together tasks coming from different programming languages, applications or even operating systems.

More importantly here, Jenkins is completely independent of SQL Server, meaning that Jenkins jobs will continue to trigger and execute even when SQL Server is not running; jobs will be able to talk directly to Availability Group listeners instead of deciding whether or not to run; and last but not least – the scalable architecture will allow you to run any number of jobs, and the security configuration is flexible enough to provide all kinds of access to end users.

[Image: Jenkins main page]

Here are some of the tasks that can easily be achieved by using Jenkins instead of SQL Server Agent (and relevant examples):

  • Maintenance tasks across the whole environment: run your code on any number of servers with the help of Powershell and parametrized jobs.
    • Recurring backup-restore operations – achieved by using dbatools Powershell module.
    • Deploying scripts or database projects from source-controlled repositories – can be done with dbops Powershell module
    • Initiating failovers for planned maintenance operations – T-SQL or Powershell
  • Reports: invoke scripts, generate output files, send out notifications
    • Collect data from your servers using tools from Brent Ozar’s first responder kit and feed it into PowerBi reports
    • Run tests against your environment to ensure stability using dbachecks Powershell module
    • Running tSQLt tests against your development databases after deploying the code
    • Put together a notification email and send it to multiple groups of users
    • Notify admin about recent changes to the job that led to failure.
  • Chaining jobs together: jobs can trigger other jobs, wait for the child jobs to complete and aggregate the output.
    • Deploying code in multiple environments – one following another  – with the help of the dbops Powershell module
    • Setting up custom pipelines with manual or automatic approvals and notifications
    • Include database deployment as a part of application delivery pipeline
  • Working with artifacts: Jenkins will keep all the files and logs (called artifacts) produced by the task and grant the ability to view or download files in one click.
    • Keep execution logs and errors for an extended period
    • Publish report files
  • Interaction with other systems – any Jenkins control element can be called using a rich REST API
    • Trigger Jenkins job execution from SCM webhooks, or from a Powershell or Python call (see the sketch after this list)
    • Retrieve job status information
    • Retrieve job artifacts using a simple web call
  • Integration with source code management systems: Git, SVN, TFS, you name it.
    • Deploy code by submitting commits to SCM
    • Run automated tests for your branches using multi-branch pipeline
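
To illustrate the REST API item above, here is a minimal sketch of triggering a parameterized job from Powershell. The URL, job name and credentials are hypothetical, and depending on your security settings you may also need to handle the CSRF crumb:

# Build a basic authentication header from a Jenkins user name and API token
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes('jenkins_user:api_token'))
$headers = @{ Authorization = "Basic $token" }

# Trigger the parameterized job, passing a parameter in the query string
Invoke-RestMethod -Method Post -Headers $headers -Uri 'http://jenkins.mycompany.com:8080/job/Copy-Database/buildWithParameters?SourceServer=SQLPROD01'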

Jenkins Setup

The installation steps are very simple:

  • Download the latest Jenkins from the Jenkins website. You will be able to select between an installation package, a Docker image or a simple jar file.
  • Start Jenkins using a method appropriate to a chosen distribution. More info here.
  • Go to http://localhost:8080/ and follow the instructions.

Jenkins Architecture

A simplified schema of Jenkins components might look like this:

[Image: Top-level Jenkins design]

Jenkins Master – your primary Jenkins server that keeps all the metadata, provides the interface and primary functions.

Jenkins Agents – Jenkins processes running on a computer that can be used to execute Jenkins jobs. Each Jenkins master has a built-in agent by default, but it can easily be extended by adding more agents. Agents communicate over the network and can be configured to run as a specific OS user.

Executor – an execution space inside a Jenkins Agent that provides a single job queue to execute Jenkins jobs – one at a time.

Jenkins Job – a set of predefined activities executed one after another. A job execution attempt triggers a Build that can either fail or succeed and will contain relevant artifacts and logs. Each Build runs in a Workspace – dedicated working folder in Jenkins home directory.

[Image: Jenkins has some really good-looking trend visualizations]

Jenkins Plugins allow Jenkins to extend its capabilities through free community-driven extensions. The extensions usually add new functionality or modify existing behavior. Actually, let's talk more about Jenkins plugins, as they are an integral part of a Jenkins setup.

Jenkins Plugins

One of the best reasons to use Jenkins is that, while it's distributed with a bare minimum of components, its functionality can be (and should be) extended by picking from thousands of plugins coming from a huge Jenkins community. Actually, there are so many of them that sometimes it's really confusing where to start. Here's the list of plugins that I find very useful for all intents and purposes:

  • Active Directory – enables authentication through Active Directory
  • Blue Ocean – beautiful interface to build and control pipelines
  • Build Pipeline – adds a dashboard that can show jobs chained together into a build pipeline
  • Build Timeout – terminates the build after a predefined amount of time
  • Conditional BuildStep – adds conditions to the job steps if necessary
  • Copy Artifact – copies artifacts from previous builds/other jobs
  • Email Extension – enhanced email experience
  • Folders – allows you to create folders for jobs
  • Git – integrates Jenkins with Git
  • Parametrized Trigger – allows triggering jobs with parameters
  • Powershell – allows running Powershell scripts or ad-hoc commands
  • Promoted Builds – promotes build status based on certain criteria and adds a possibility of manual interactions in the job chain (aka manual promotion)
  • Timestamper – adds timestamps to the log
  • Workspace Cleanup – allows cleaning up the workspace before/after the build

Conclusion

Jenkins can become an important asset in your day-to-day work as a SQL Server professional and bring a whole new level of possibilities. However, it only makes sense to use it for certain kinds of tasks, which can be characterized as operating outside of the SQL Server instance scope. It will not completely replace the functionality and convenience of SQL Server Agent, yet in certain scenarios it can become your faithful ally.

PS

[Image: Who would win: SQL Server special Agent or one Jenkins boi]

If I were asked in which situations I would use Jenkins instead of Agent, the answer would probably be 'it depends' – and it really does! I tried to come up with some generic scenarios in which one of the two tools would be a more logical choice based on my experience. However, these are only recommendations and, after all, there is always more than one right answer, so – feel free to experiment!

| Scenario | Tool of choice | Comments |
| --- | --- | --- |
| Execute SQL Server query | SQLAgent | Easy to set up and use |
| Execute OS command or a script | Jenkins | Better logging and troubleshooting, no dependencies on SQL Server, easily scalable |
| Run SSIS package | SQLAgent | Natively supported by Agent |
| Retrieve data from a single server | SQLAgent | A single Linked Server is easy to maintain |
| Retrieve data from multiple servers | Jenkins | Will take 1-2 lines of Powershell code instead of 20+ minimum in Agent; easily scales |
| Perform maintenance operation on a database | SQLAgent | Transact-SQL operations are handled perfectly from the SQL Server Agent |
| Perform maintenance operation on an instance/server/cluster/AG | Jenkins | No dependencies on SQL Server itself; can control status remotely |
| Execute SQL code from a remote source | Jenkins | Integration with SCM; plugins to control the behavior |
| Multi-step T-SQL job | SQLAgent | Has easily configurable order with actions on success/failure |
| Multi-step mixed language job | Jenkins | Easy to troubleshoot, more options, reliable |
| Trigger job on a SQL Server event | SQLAgent | Can use native tools (events or alarms), can be called from code |
| Trigger job on a remote application call | Jenkins | No permissions on SQL Server required, REST API, no DB connectivity needed |

Corruption in system object possibly due to schema or catalog inconsistency

I’ve been playing with the database snapshot feature on one of my lab SQL 2016 servers, when suddenly the snapshot operations started to throw me some weird errors:

Msg 5901, Level 16, State 1, Line 1
 One or more recovery units belonging to database 'ci_dbUp' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
 Msg 1823, Level 16, State 2, Line 1
 A database snapshot cannot be created because it failed to start.
 Msg 211, Level 23, State 8, Line 1
 Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

Well, hello there. To be honest, the server I was playing on was one of those servers that might easily lose a battle to a modern calculator; in other words, an old "potato" server with SCSI drives, a decent number of cores and non-existent disk performance. That is probably what triggered the corruption: I was running multiple iterations of the create snapshot -> restore from snapshot -> remove snapshot chain while a restore operation was running in parallel. Each snapshot operation (usually instantaneous) was taking about 30 seconds.

One of them apparently was fatal for the database. So, what’s the plan, Doc?

Diagnostics

Since it’s a pretty small database, I can go with full-blown DBCC CHECKDB with EXTENDED_LOGICAL_CHECKS (you can consider running lighter checks first though, see https://docs.microsoft.com/en-us/sql/t-sql/database-console-commands/dbcc-checkdb-transact-sql):

DBCC CHECKDB ('ci_dbUp') WITH NO_INFOMSGS, ALL_ERRORMSGS, EXTENDED_LOGICAL_CHECKS
Msg 5901, Level 16, State 1, Line 1
One or more recovery units belonging to database 'ci_dbUp' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
Msg 1823, Level 16, State 2, Line 1
A database snapshot cannot be created because it failed to start.
Msg 1823, Level 16, State 8, Line 1
A database snapshot cannot be created because it failed to start.
Msg 7928, Level 16, State 1, Line 1
The database snapshot for online checks could not be created. Either the reason is given in a previous error or one of the underlying volumes does not support sparse files or alternate streams. Attempting to get exclusive access to run checks offline.
Msg 211, Level 23, State 8, Line 1
Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

DBCC CHECKCATALOG gives pretty much the same output.

SQL Server logs: same error number, memory dumps, no specifics.

I could try a restore (or a partial restore), but the database is not in the full recovery model, which would be required for a page restore. Also, the msdb.dbo.suspect_pages table shows zero suspect pages, so probably nothing can be done here except a full restore to the previous night's backup.

Backup source files

Before doing any kind of recovery, it's always a good idea to create a backup of your database files. A regular backup probably won't do – you generally want a physical copy of the files – but we can still take one, just in case:

BACKUP DATABASE ci_dbUp TO DISK = 'd:\temp\ci_dbUp.bak'
Msg 5901, Level 16, State 1, Line 1 
One or more recovery units belonging to database 'ci_dbUp' failed to generate a checkpoint. This is typically caused by lack of system resources such as disk or memory, or in some cases due to database corruption. Examine previous entries in the error log for more detailed information on this failure.
Msg 3013, Level 16, State 1, Line 1
BACKUP DATABASE is terminating abnormally.
Msg 211, Level 23, State 8, Line 1
Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

Hmm, okay. How about a physical file copy? It will, however, require putting the database into OFFLINE mode, which means it probably won't come back online willingly, as all databases are required to pass basic consistency checks before doing so – and that obviously won't happen this time. From here on, we'll have to work with the database in emergency mode, which severely limits access to the database.

ALTER DATABASE ci_dbUp SET OFFLINE WITH ROLLBACK IMMEDIATE

.. did not work!

Corruption in database ID 8, object ID 41 possibly due to schema or catalog inconsistency. Run DBCC CHECKCATALOG.

Alrighty then, let’s shut down the whole server and copy the database files while it’s offline. That, of course, worked, but now we have our database in a ‘failed to recover’ state:

Surprisingly, it did not perform any consistency checks and simply put the database in the ‘Restoring’ mode:

The database 'ci_dbUp' is marked RESTORING and is in a state that does not allow recovery to be run.

Hmm, apparently the redo log failed to apply after the restart, which means it makes perfect sense to reconstruct the log file. Since I now have the original files copied, I will work on the copies instead of the originals.

Attach the database without the log file

Let's try to re-attach the files with the ATTACH_REBUILD_LOG option:

CREATE DATABASE ci_dbUp2 ON
( name = 'ci_dbUp', filename = 'I:\MSSQL13.MSSQLSERVER\ci_dbUp2.mdf'),
( name = 'ci_dbUp_log', filename = 'J:\MSSQL13.MSSQLSERVER\ci_dbUp2.ldf')
FOR ATTACH_REBUILD_LOG
Command(s) completed successfully.

Would you look at that! Apparently, there was some kind of stale transaction that somehow kept an internal lock on a system table and couldn't be killed by regular means. The only option was to remove that transaction from the redo log by resetting the transaction log file, which effectively left me without the latest and greatest transactions – but that's still a pretty good result!

Now that the database has proven to be restorable without data loss inside the data file, we can try running some DBCC checks against the original files and see if they bring any better results. But at this point we have already recovered the database – all checks are returning no errors whatsoever; snapshots and backups are working as expected.

Running DBCC CHECKDB with repair options

Since the database is in the recovering mode, the only way of bringing it online is to create an empty database with the exact same structure and then substitute the database files with the bad ones.

CREATE DATABASE [ci_dbUp_R]
 ON  PRIMARY 
( NAME = N'ci_dbUp', FILENAME = N'I:\MSSQL13.MSSQLSERVER\ci_dbUp_R.mdf' , SIZE = 32768KB , FILEGROWTH = 10%)
 LOG ON 
( NAME = N'ci_dbUp_log', FILENAME = N'J:\MSSQL13.MSSQLSERVER\ci_dbUp_R_log.ldf' , SIZE = 16384KB , FILEGROWTH = 10%)

Now, replace the files with the ones I’ve saved before:

ALTER DATABASE ci_dbUp_R SET OFFLINE WITH ROLLBACK IMMEDIATE
-- ...copy files...
ALTER DATABASE ci_dbUp_R SET ONLINE

Time for surgery! Let’s enable emergency mode for our database:

ALTER DATABASE ci_dbUp_R SET EMERGENCY

Now it looks like this:

[Image: the database shown in Emergency mode]

Let's confirm whether the database is still in an inconsistent state:

DBCC CHECKDB ('ci_dbUp_R') WITH NO_INFOMSGS, ALL_ERRORMSGS, EXTENDED_LOGICAL_CHECKS

… and it returned no errors! Let’s try to bring it back online:

ALTER DATABASE ci_dbUp_R SET ONLINE

Command(s) completed successfully.

Awesome! We didn't even need to run the repair – re-attaching the files under a different name completely resolved the issue.

Database recovery is always fun, as long as it’s not the production one. Stay tuned!

Job schedule at a glance

I have often asked myself: can some of my SQL Server jobs be the reason for the database slowness users are currently experiencing? On multiple occasions that was indeed the case and I had to reschedule some of them, because there were too many running at the same time. For example, re-indexing and full backups should really never run together, because that creates a huge strain on the server (and also makes your differential backups bigger). In order to see a clear picture of the jobs running on the server during the week, I created a report that helped me bring order (and justice) to my job schedules.

I present to you Job Schedule Matrix:


This is an SSRS report that uses a matrix object to display the frequency of the jobs running in your environment. It comes with an ETL script to extract data from your SQL Servers and upload it to the database of your choice. It can also support other job types, as long as you write your own ETL script and specify the appropriate Server Type.

Download it from here: https://github.com/nvarscar/schedulematrix-report

It might be a little slow when you tell it to show all of your jobs from a multitude of servers, but otherwise it works relatively quickly. There is also a chart-based version that looks slightly uglier, but overall is even faster.

Installation instructions are in the readme, and let me know if you have any problems with it!

PS if you are using different server types (other than SQL Server), make sure to publish instance names even if there is no such thing, as this is what you will eventually see in the report body.

Deploying SQL Server code: the good, the bad, the ugly

I've been asked to run anywhere from 50 to 200 deployment scripts more than a few times throughout my career, and it has always been a pain in the back to make sure that you are running them in the correct order against the proper database… I don't know about you, but I never was a fan of mundane labor, even when it comes to deployments to prod.

Obviously, there should already be solutions on the market that allow you to do that without any manual intervention. But, as it turns out, there are not too many of them, and each has its own framework or even its own scripting requirements. As such, there are only two methodologies to consider: model-based and script-based deployments.

Model-based approach

In this deployment method, the change code is generated by comparing the current database with the source code of the database stored in a repository, generating transformation scripts for each individual deployment.

The good:

  • Will ensure that the database schema is 100% the same across all of your environments. Every deployment might contain different transformation scripts, but the resulting schema will always be at its finest: all the changes in the schema will be enforced regardless of the database state.
  • Best possible integration with the source control management tools – every object has its own script, easy to branch out, easy to deploy – as long as you have necessary deployment tools.
  • No need to write transformation scripts – they will be generated automatically. That means no more sequential database upgrades from version A to version B: all of the changes will be implemented organically within a single deployment.
  • Team-oriented: easy to integrate multiple changes at once, as each object is treated independently.
  • Can easily be automated using any of the automation providers: Jenkins, Bamboo, VSTS, you name it. No need to track versions, and the deployment scenario is simple and straightforward. 3rd-party tools usually come with an impressive list of CI plugins and modules for CLI.

The bad:

  • Unpredictable transformations. With automatically generated code, the transformation can just as easily be efficient as not. Things like re-creating tables and indexes because of a modified object definition can easily happen.
  • No automatic rollback procedure possible. Once the code is successfully deployed, there is no way back – the rollback will always be manual, be it a custom script or restore from a backup.
  • Once you start following this path, you should be wary of using any other methodology, because objects that exist outside of the database definition in your source control might be removed from the database during the next model-based deployment. Every single team should be on the same page and using the same set of tools.
  • The development and deployment process is somewhat different from the classical script-based approach and it will take time (and effort) for developers and administrators to adapt to the changes.

The ugly:

  • Black box deployment – you no longer control which code you run against your database server.
  • Not all of the frameworks can handle data manipulation statements during deployments. In some of them you can source-control the whole table content (and enforce it on every deployment), while others only allow importing rows from a plain-text file source. You can always deploy DML scripts outside of the model-based framework, but at that point you'll have to support two independent frameworks.
  • While a strict deployment scenario for your databases brings an excellent level of uniformity across your environments, it also brings along a lot of limitations; the biggest of them is that you become dependent on 3rd-party tools and can only operate within the limits defined by those tools.

TL;DR

As a whole, this deployment method seems really promising and reliable whenever it comes to managing multiple versions of the schema, especially when multiple teams are working on different features at the same time. This methodology really helps to integrate all the changes together and allows for Agile deployments, as there is no need to maintain the order of deployments and little to no dependency on the destination database code. However, it introduces some limitations to the deployment process, and while that might not be a big deal for some shops, it can easily become a show-stopper in others.

Deployment tools

There are a number of tools built around this deployment method. Here are the most popular ones:

Script-based approach

This more traditional deployment model relies on a series of transformations to achieve a certain database state. The transformations are described in the code and are deployed in an order defined by the developer(s).

Deployments in this methodology are usually done in bundles, where one or more SQL scripts are grouped together and represent one or more changes to the database structure. The main issue with automating this kind of deployment is that the scripts should always be run in the same exact order; deployment bundles should be deployed in a specific order as well, thus requiring the deployment framework to track the state of the database after each deployment.

The most common approach I have seen is to save the names of the scripts that have already been deployed into a tracking table within the same database.


All the scripts are executed in alphabetical order. Once a script has been executed, it is recorded in the tracking table and will never be executed again, to avoid schema corruption. Another option out there would be to write code that is re-runnable, but a certain order is still a necessity.
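
A home-grown framework implementing this logic can be surprisingly small. Here is a rough Powershell sketch that assumes a dbo.SchemaVersions tracking table with a ScriptName column (the table and column names are hypothetical):

$instance = 'MyServer'
$database = 'MyDatabase'

# Script names already recorded in the tracking table
$deployed = (Invoke-DbaSqlQuery -SqlInstance $instance -Database $database -Query 'SELECT ScriptName FROM dbo.SchemaVersions').ScriptName

# Execute pending scripts in alphabetical order, recording each one after it succeeds
foreach ($script in Get-ChildItem .\scripts\*.sql | Sort-Object Name) {
    if ($deployed -contains $script.Name) { continue }
    Invoke-DbaSqlQuery -SqlInstance $instance -Database $database -Query (Get-Content $script.FullName -Raw)
    Invoke-DbaSqlQuery -SqlInstance $instance -Database $database -Query "INSERT INTO dbo.SchemaVersions (ScriptName) VALUES ('$($script.Name)')"
}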

The good:

  • Can be adopted pretty much at any moment, as it does not introduce any alterations to the existing manual deployment process.
  • All transformations are visible in the code and will be identical in every single environment.
  • Custom DML scenarios can be easily added to the deployment scenario.
  • Developers can use any means of developing the code.
  • Simple structure allows for deep customization (transaction handling, pre/post-scripts, variable replacement, etc.).
  • Rollback scripts can be a part of the deployment bundle.

The bad:

  • Requires manual packaging most of the time – grouping scripts together into bundles and defining execution order.
  • Any change outside of the deployment framework has a chance to fail the deployment, as the scripts are expecting the database schema to be in a certain state.
  • If the database has more than one developer, there is a chance that two different scripts would have conflicting statements.
  • More opportunities for malicious code to get into the database due to using deployment scripts. Pre-deployment code analysis might be a good idea.

The ugly:

  • You can never be 100% sure that the database schema is valid. Manual changes will only surface once a deployment has failed due to an inconsistency in the database schema.
  • All the transformations have to be applied in due order. Even if pending changes are nullified further on among the commands in a next-in-line bundle, they still have to be applied.
  • A schema version table will be created in each and every database. Both developers and administrators should be aware of it, as it controls the deployment behavior.

TL;DR

The most attractive part of this particular methodology is that it doesn’t really require any special development tools and at the same time does not introduce any limitations to the deployment scenario. Any deployment that involves running scripts can be integrated into a deployment bundle and be deployed automatically, as long as you have corresponding deployment tools. It is very straightforward and flexible, but also comes with a caveat of having more manual configuration/deployment steps as compared to model-based deployments.

Deployment tools

It is surprising, but there are not that many deployment tools that you can use out of the box for this kind of deployment. The reason is probably that the concept is actually very simple, and it can be easier to write a home-grown framework in your favorite scripting language than to adopt an existing solution. Here are some of them:

  • Flyway (free executable)
  • DbUp (open-source .Net library, requires wrapper code to initiate the deployment)
  • dbdeploy (free java library, initialized by an ant script)
  • MigrateDB (free executable, uses custom script format)
  • AutoPatch (open-source java application)

Post Scriptum

There are a few other resources that are definitely worth checking if you want to learn more about automated deployments: