Today we are proud to present an interview between Stephanie Herr from Redgate and Stefan Mieth from AIT about Database Lifecycle Management (DLM) and Application Lifecycle Management (ALM). While Microsoft gives us SQL Server Data Tools (SSDT) as good tooling for developing databases, along with the ability to store them in Team Foundation Server version control, it is only an entry point into the whole DLM process. We have already successfully finished several database projects using this toolset, but because we are always looking for better and more efficient tools, it is sometimes just not enough…
Stephanie is a DLM expert and will tell us what DLM is and how it integrates into our application development and Application Lifecycle Management.
Stephanie, nice to see you! Could you briefly introduce yourself and tell us a little about what makes you tick and what you’re focused on?
Stephanie Herr, Redgate
Sounds like a good fit for everyone! So, today’s topic is DLM. Database Lifecycle Management is very important for everyone developing an application with a database behind it, but I don’t think everybody is familiar with DLM. Can you give us a short description of what DLM is?
Database Lifecycle Management is a set of processes and tools that include the database in Application Lifecycle Management. That sounds a little too simplified… It’s all about making changes to your database and delivering those database changes to your users using a reliable, repeatable process that is integrated with the rest of your software development.
It’s important that the database is not a bottleneck when it comes to releasing your applications, especially in the world of agile development and frequent releases. We want to use tools for our database development like version control, continuous integration, and automated testing. Application developers have been taking advantage of these for years and it’s time we took advantage of them for the database. Some teams we’ve seen are even taking this to the next step and doing automated deployments or easing the deployment process by having a “one-click” process that’s repeatable and doesn’t rely on manually following a huge document (and accidentally missing a step).
We want to make deployments simple and less scary. We want to test deployments across multiple environments that mimic production as closely as possible, so that you’re more confident that a release will go well. If possible, we want to let DBAs and Ops Teams go home at normal hours and not worry about a huge scary release that must be deployed during late maintenance windows!
Sorry, that was a bit of a longer description. There’s just so much to say about DLM… I could go on!
Maybe just a little more. It sounds like DLM is actually very similar to ALM. However, at a more detailed level, thinking of ALM as an arc spanning from the very first vision right up until the retirement of your product, including requirements gathering, planning, development and so on… where do you think DLM should be placed in the big world of software development? At which interfaces can it be integrated into our existing ALM processes?
DLM is focused on the database and bringing it into your ALM processes. I think if your application relies on a backend database, then DLM needs to go hand in hand with ALM. Therefore, it integrates into every interface where changes to the code base are either decided or implemented (from requirements gathering through to monitoring). Our DLM tools integrate with a lot of tools that exist for ALM. For database source control, we integrate with Subversion, Team Foundation Server, Git, Mercurial, and more. For database continuous integration, we integrate with TeamCity, Microsoft Team Build, and more. For releases, we integrate with Octopus Deploy, Microsoft Release Management, and other release management tools. Our thinking is that you already have tools for your application. We want to leverage these and bring your database into these tools and processes to make your deployments easier.
Database deployments are hard, and we think they’re not talked about often enough when people talk about ALM, but they need to be. Your application isn’t going to work very well if the database doesn’t exist or if it’s not at the right version that your application needs. The database is hard because of the data that needs to persist and also because of security concerns around this data.
You cannot lose data during a deployment. You have to worry about current connections and data consistency. You also have to worry about the security of that data. In a lot of areas, developers may not be able to have a copy of production data because of privacy issues. This makes it slightly harder because you’re testing scripts against something that is similar to production, but not production, so anything could happen when it comes time for the production release.
It’s really great if you can have a staging environment that matches production exactly, so that upgrades can be run and verified there before applying them to production. You also need to make sure that no changes were made to production between when you tested the deployment on staging and when you’re ready to run it on production. We describe a change to a database from an expected state as “drift”. There are many reasons this can happen and some are very valid, such as a hotfix to production. You need to know about this drift so you can incorporate it back into your development environment, making sure it doesn’t get overwritten the next time you do a deployment.
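To make the idea of drift concrete, here is a minimal sketch (in Python, not Redgate’s tooling) of how an expected, version-controlled schema state could be compared against what’s actually on a server. The function names and the idea of hashing scripted object definitions are illustrative assumptions, not how SQL Lighthouse works internally:

```python
import hashlib

def schema_fingerprint(object_definitions):
    """Compute a stable fingerprint over a set of database object
    definitions (e.g. the scripted CREATE statements for tables,
    views, and procedures).
    object_definitions: dict mapping object name -> DDL text."""
    digest = hashlib.sha256()
    # Sort by object name so the fingerprint is order-independent.
    for name in sorted(object_definitions):
        digest.update(name.encode("utf-8"))
        digest.update(object_definitions[name].strip().encode("utf-8"))
    return digest.hexdigest()

def detect_drift(expected, actual):
    """Return the names of objects whose live definitions differ from
    the expected (version-controlled) state -- i.e. drift."""
    drifted = []
    for name in sorted(set(expected) | set(actual)):
        if expected.get(name, "").strip() != actual.get(name, "").strip():
            drifted.append(name)
    return drifted
```

Comparing the fingerprint taken when staging was verified against one taken just before the production run is a cheap way to know whether a hotfix (or anything else) changed the target underneath you.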
By the way, we have a brand new tool called SQL Lighthouse that provides a great dashboard of your database environments, and can alert you about changes and drift. This tool is currently a free beta and we’re looking for people to try it out and give us feedback before the full release. We have a lot of great ideas on our backlog for this tool that I’m really excited about. It would be great to hear from your readers and get their thoughts on the beta and what else they would like to see it do.
So anyway, there’s a lot to think about when it comes to the database. You need to think about server configurations, schema, and reference data. Reference data drives the application, so it needs to be versioned alongside the application and database schema and should be released in the same process. You also need a plan for what happens if a deployment doesn’t go as expected. This could be restoring a backup, having a rollback script, reverting a snapshot, or even rolling forward a “fix,” especially if you have a great process in place that makes releasing the fix easy.
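Versioned reference data is worth a quick illustration. Deploying it safely means computing which rows to insert, update, or delete so the target table matches the version-controlled rows (a MERGE, in SQL Server terms). Here is a hedged sketch in Python; the function and column names are made up for the example:

```python
def reference_data_diff(desired, current, key="id"):
    """Compute the insert/update/delete sets needed to make the
    `current` reference table match the version-controlled `desired`
    rows. Rows are dicts; `key` names the primary-key column."""
    desired_by_key = {row[key]: row for row in desired}
    current_by_key = {row[key]: row for row in current}
    # Rows in source control but not yet in the database.
    inserts = [r for k, r in desired_by_key.items() if k not in current_by_key]
    # Rows in the database that source control no longer knows about.
    deletes = [r for k, r in current_by_key.items() if k not in desired_by_key]
    # Rows present in both but with different values.
    updates = [r for k, r in desired_by_key.items()
               if k in current_by_key and current_by_key[k] != r]
    return inserts, updates, deletes
```

Because the diff is computed, not hand-written, the same release process works whether the reference data changed by one row or fifty.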
Wow – that’s pretty extensive. What’s the simplest thing readers could do immediately to improve their ALM / DLM processes?
The first step in DLM is getting your database in source control. There’s most likely a source control system that your application developers are using. Find out where the related application code is stored and put your database code there. (Yes, database is code.) You also want to source control any configurations and reference data that should be tracked and deployed alongside the schema as well.
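“Database is code” can be as simple as one .sql file per object, grouped by type, committed next to the application. Here is a minimal sketch of that layout in Python (the folder names are a common convention, not a requirement of any particular tool):

```python
from pathlib import Path

def script_objects_to_folder(objects, root):
    """Write one .sql file per database object, grouped by object type,
    so the schema can be committed to source control alongside the
    application code.
    objects: iterable of (object_type, name, ddl_text) tuples."""
    root = Path(root)
    for object_type, name, ddl in objects:
        folder = root / object_type  # e.g. Tables, Views, StoredProcedures
        folder.mkdir(parents=True, exist_ok=True)
        # One object per file keeps diffs small and code reviews easy.
        (folder / f"{name}.sql").write_text(ddl.strip() + "\n", encoding="utf-8")
```

With the schema laid out like this, every change shows up as a readable diff in whatever version control system the application team already uses.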
Ok, but my application developers are using Microsoft’s Team Foundation Server, and my Database developer uses SQL Server Management Studio. How can these environments fit together?
They actually fit together really nicely. Redgate’s SQL Source Control is a plug-in for SSMS that integrates with existing source control systems like TFS. It allows you to continue to work on your database in SSMS, and to commit those changes to TFS. When you commit, you can associate the changes with a TFS work item ID, which could represent a user story or a bug. This is great because then you can see why the change was made and any other changes that were associated with the work item in one place.
Once your database is in source control, then you can use other TFS features like TFS Team Build and Redgate SQL CI to automate your builds and even do continuous integration for both your application and your database. The next step is putting database unit tests into this process to give you fast feedback about the state of your database and also provide solid regression testing as you’re making changes to the database.
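The shape of a database unit test is worth showing. This sketch uses Python’s built-in sqlite3 and unittest just to keep it self-contained and runnable; the Orders table and CustomerTotals view are invented for the example, and a real SQL Server setup would use a framework such as tSQLt instead:

```python
import sqlite3
import unittest

SCHEMA = """
CREATE TABLE Orders (Id INTEGER PRIMARY KEY, CustomerId INTEGER, Total REAL);
CREATE VIEW CustomerTotals AS
    SELECT CustomerId, SUM(Total) AS Total
    FROM Orders GROUP BY CustomerId;
"""

class CustomerTotalsTest(unittest.TestCase):
    def setUp(self):
        # A fresh in-memory database per test: fast, isolated, repeatable.
        self.db = sqlite3.connect(":memory:")
        self.db.executescript(SCHEMA)

    def test_totals_are_summed_per_customer(self):
        self.db.executemany(
            "INSERT INTO Orders (CustomerId, Total) VALUES (?, ?)",
            [(1, 10.0), (1, 5.0), (2, 7.5)])
        rows = dict(self.db.execute(
            "SELECT CustomerId, Total FROM CustomerTotals"))
        self.assertEqual(rows, {1: 15.0, 2: 7.5})
```

Because each test builds its own schema and fixture data, it can run on every CI build and catch a regression in the view logic within minutes of the commit that caused it.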
Finally, TFS Team Build can kick off a release in Microsoft’s Release Management. Using a tool like Redgate SQL Release, this can deploy the database to other environments (like test or QA) to run additional tests. This could also include populating the database and running load and/or performance tests against the database automatically. How cool is that?
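The repeatable, “one-click” deployment idea boils down to a migration runner: apply versioned scripts in order, record each one, and skip what’s already applied. Here is a small sketch (again plain Python with sqlite3, not Redgate SQL Release; the SchemaVersion table name is an assumption for the example):

```python
import sqlite3

def apply_migrations(conn, migrations):
    """Apply versioned migration scripts in order, recording each one in
    a SchemaVersion table so deployments are repeatable: re-running the
    same release skips anything already applied.
    migrations: list of (version, sql_script) tuples."""
    conn.execute(
        "CREATE TABLE IF NOT EXISTS SchemaVersion (Version INTEGER PRIMARY KEY)")
    applied = {v for (v,) in conn.execute("SELECT Version FROM SchemaVersion")}
    for version, script in sorted(migrations):
        if version in applied:
            continue  # already deployed to this environment
        conn.executescript(script)
        conn.execute("INSERT INTO SchemaVersion (Version) VALUES (?)", (version,))
    conn.commit()
```

Because the runner is idempotent, the exact same release can be pointed at test, QA, staging, and finally production, which is what makes the staging rehearsal meaningful.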
Impressive! We have all the tools integrated, bringing the team one step closer together – that’s awesome! I’m just thinking of the possible performance boost for our projects, and it’s amazing. One last question for the endless debate: Can database development be agile? (hint: the answer has 3 letters)
Absolutely. Wait, that isn’t 3 letters…
To me, it seems like database development has to be agile to support an agile development process. You don’t want the database to be the bottleneck. You want it to be integrated into your development processes. You want it to be tested automatically to give you confidence in the changes and check for regressions. You want it to be as easy as possible to release the changes to production. You want your developers and DBAs to be spending time on things that are hard and truly valuable, not the things that can be reliably automated.
That said, it’s definitely tricky. Especially if multiple applications use the same database. Refactoring for one application can be a challenge because you want to make sure that the other applications still work, but loosely coupled applications and libraries are already facing similar challenges, so we should be able to learn from them. There may also be reports or data warehouses that need to be considered, which is fairly unique to databases.
So yeah, it is tricky, but definitely possible and something to work towards.
So, if you develop your application using an agile approach, you should consider developing your database using agile techniques as well. I think that’s a good conclusion to end today’s talk. Thank you for your time, and we look forward to our next conversation!
Thank you so much! I really enjoyed our talk and hope your readers do too. I’m happy to talk more on twitter @SQLStephanie. Or, if you want to learn more about DLM, visit us at http://www.red-gate.com/products/dlm.