‘DevOps’ + ‘ChatGPT/AI’ == ‘TRUE’

OpenAI’s GPT models are powerful tools that can be used in many different applications, but one of the areas where they made the most sense to me was within the DevOps platform. For those who don’t know what DevOps is: it can briefly be defined as the combination of software development and IT operations that aims to automate and streamline the software delivery process.

In this blog post, we will explore how to use OpenAI GPT models within DevOps to improve the software development process.

What are OpenAI GPT models?

OpenAI GPT (Generative Pre-trained Transformer) models are machine learning models that use deep learning techniques to generate natural language. These models are trained on large datasets of text and can generate human-like responses to prompts.

The GPT models are pre-trained on large amounts of text and can be fine-tuned on specific tasks. This allows the models to generate natural language responses to specific prompts, such as questions or requests for information.
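
To make this concrete, here is a minimal sketch of how a prompt is sent to a GPT model. It assumes the public OpenAI completions endpoint and an API key in the OPENAI_API_KEY environment variable; the model name and parameters are only examples.

```python
import os
import requests

# Minimal sketch: send a prompt to the OpenAI completions endpoint.
# Assumes OPENAI_API_KEY is set; the model name is just an example.
response = requests.post(
    "https://api.openai.com/v1/completions",
    headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
    json={
        "model": "text-davinci-003",
        "prompt": "Summarize this user story in one sentence: As a user, I want ...",
        "max_tokens": 100,
        "temperature": 0.2,
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["choices"][0]["text"].strip())
```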

How can OpenAI GPT models be used in DevOps?

One way to use OpenAI GPT models in DevOps is to improve the software development process itself. Here are a few ways GPT models can be put to work within DevOps:

  1. Code commenting and documentation quality analysis: Use GPT-3 to analyze the quality of comments in the codebase and suggest improvements to make them clearer, more concise, and more informative. You can also use GPT-3 to analyze the quality of the documentation in the wiki. This process helps ensure that the documentation accurately reflects the functionality of the code and the user stories.
  2. Test automation: Use GPT-3 to automatically generate test automation scripts based on natural language descriptions of the desired tests. This process helps streamline the testing process by automating the generation of test scripts and reducing the need for manual testing.
  3. Code style enforcement: Use GPT-3 to enforce a consistent code style across the codebase by suggesting corrections or reformatting the code. This process helps ensure that the codebase follows consistent formatting and style guidelines, making it easier to read and maintain.
  4. Project estimation: Use GPT-3 to estimate the time and resources required to complete a project based on natural language descriptions of the requirements and constraints. This process helps ensure accurate project planning and resource allocation.
  5. Code standardization: Use GPT-3 to standardize the codebase by suggesting common programming practices, coding standards, and design patterns. This process helps ensure that the codebase follows consistent coding practices and design patterns, making it easier to read and maintain.
  6. Improving commit messages: Use GPT-3 to suggest better commit messages based on the changes made to the code (see the sketch after this list). This process helps ensure that commit messages accurately reflect the changes made to the codebase.
  7. Enhancing natural language search: Use GPT-3 to analyze code and generate descriptions and tags for functions and classes to improve your DevOps platform’s natural language search capabilities. This process helps improve the discoverability of code and makes it easier to find specific code snippets.
  8. Code generation: Use GPT-3 to generate code snippets based on natural language prompts from user stories and acceptance criteria. This process helps automate the code writing process, reducing the need for manual coding.
  9. Automated testing: Use GPT-3 to generate test cases based on the code and user stories, which can then be executed automatically to test your code. This can speed up test case automation considerably.
  10. Project management: Use GPT-3 to generate reports and dashboards based on project data. This process helps automate project reporting, ensuring accurate and timely reporting.
  11. Natural language interface: Use GPT-3 to create a natural language interface for your DevOps platform. This process helps improve the platform’s usability by allowing users to interact with it using natural language commands. For example, ask DevOps to create a user story to resolve the feature gap from bug 123 or ask it to check which pipelines are currently running in the organization.
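
As a taste of item 6, here is a hedged sketch of how a commit message suggestion could work: feed the staged diff to the model and ask for a short, imperative message. The helper name and prompt wording are my own; it assumes the same OpenAI completions endpoint and API key as above.

```python
import os
import subprocess
import requests

def suggest_commit_message() -> str:
    """Sketch: ask GPT-3 to propose a commit message for the staged changes."""
    diff = subprocess.run(
        ["git", "diff", "--cached"], capture_output=True, text=True, check=True
    ).stdout
    prompt = (
        "Write a concise, imperative git commit message (max 72 characters on the "
        "first line) for the following diff:\n\n" + diff[:6000]  # keep the prompt within limits
    )
    resp = requests.post(
        "https://api.openai.com/v1/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={"model": "text-davinci-003", "prompt": prompt, "max_tokens": 80, "temperature": 0.3},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()["choices"][0]["text"].strip()

if __name__ == "__main__":
    print(suggest_commit_message())
```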

Any one of these warrants a blog post of its own, but this post is more about the art of the possible than the in-depth possibilities.

Implementation takes forever?!

You might think it takes a huge effort to accomplish any of these features within Azure DevOps. This is, of course, not true. You can easily start with a proof of concept by copying DevOps text into any GPT-based chat application such as ChatGPT, or by using the GPT playground available on, for example, the OpenAI website.

A few examples using a manual approach

GPT-3 as the Azure DevOps API, where the prompt is the requirement and the output is an API call you can copy and paste into Postman:
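
For illustration, the kind of output you are after could look like the call below: creating a work item through the Azure DevOps REST API. I have expressed it in Python rather than as a raw Postman request, and the organization, project, work item type, fields, and personal access token are placeholders.

```python
import requests

# Placeholders: organization, project and personal access token (PAT).
org, project, pat = "my-org", "my-project", "<personal-access-token>"

# Create a "User Story" work item via the Azure DevOps work item tracking API.
url = f"https://dev.azure.com/{org}/{project}/_apis/wit/workitems/$User%20Story?api-version=7.0"
body = [
    {"op": "add", "path": "/fields/System.Title", "value": "Customer can export invoices to PDF"},
    {"op": "add", "path": "/fields/System.Description", "value": "As a customer I want to export my invoices to PDF so that ..."},
]
resp = requests.post(
    url,
    json=body,
    headers={"Content-Type": "application/json-patch+json"},
    auth=("", pat),  # Azure DevOps PATs are sent as basic auth with an empty username
)
resp.raise_for_status()
print(resp.json()["id"])
```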

GPT-3 as a business consultant writing user stories, where the prompt is the requirement description from a customer workshop:

This is not the best long-term solution, but it will quickly give you an idea of the feasibility of using this approach.

Integrating GPT-3 programmatically

A more automated approach is to integrate GPT-3 directly into Azure DevOps, either as Azure DevOps extensions, Logic Apps, Power Automate flows, or Azure Functions.

I prefer to consume the GPT model through Azure OpenAI and reuse the API/custom connector for multiple DevOps organizations or any other application requiring a similar service. This also lets me manage everything on one platform. It works just as well using the OpenAI APIs directly, but there is also a cost perspective: the prices differ between the two and keep changing.
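
The switch from the public OpenAI API to Azure OpenAI is mostly a matter of endpoint and authentication. A hedged sketch, with the resource name, deployment name, and api-version as placeholders you would replace with your own:

```python
import os
import requests

# Placeholders for your Azure OpenAI resource and model deployment.
resource, deployment = "my-openai-resource", "my-gpt3-deployment"
url = (
    f"https://{resource}.openai.azure.com/openai/deployments/{deployment}"
    "/completions?api-version=2023-05-15"  # use an api-version supported by your resource
)
resp = requests.post(
    url,
    headers={"api-key": os.environ["AZURE_OPENAI_KEY"]},  # key-based auth instead of a bearer token
    json={"prompt": "Suggest three acceptance criteria for: export invoices to PDF", "max_tokens": 150},
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["text"].strip())
```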

Conclusion

OpenAI GPT models are a powerful tool that can be used within DevOps to improve the software development process. Whether it’s automating customer support, generating code, automating testing, or writing documentation, GPT models can save time and improve the overall quality of the software. I don’t foresee a future without using GPT-3 or similar models to support the work we do, but I equally have no idea how much our DevOps engagements will change. Only time will tell, but I am sure we have only seen the beginning.

If you have other cool ideas on using GPT-3 within your DevOps process, please let me know!

Dynamics 365 CE 99.9% Enterprise Support

As the title suggests, when dealing with D365 CE deployment projects, we expect it to fit organizational requirements close to 99.9% of the way.

No, I am not talking about the SLA from December 2021.

I’m trying to quantify the features for admins, developers, and users that we would expect in enterprise scenarios. Maybe not exactly that figure, and to be fair I haven’t done the calculations, but my point is that there are a few important things one needs to consider when running an enterprise-scale D365 CE project. Let’s start from the top.

What do we consider to be enterprise? As an enterprise architect, I would say it is the governing realm of all business processes, people, solutions, and data that we require to deliver a specified set of services or products to a given market. In essence, this is either a company, a group of companies, or a business unit within a company.

But for this article, the enterprise also implies large volumes of transactions, traceability requirements, risk mitigation, scalability, application lifecycle management support, and so on. The list goes on, but all of these refer to the variables found in companies operating at scale, combining many people across many systems and solutions.

For this article, I have focused on development and on large volumes of transactions being sent to and from D365 Customer Engagement.

First, the requirement of dealing with large volumes of transactions.

It is not unusual for a large organization to deal with hundreds of thousands of transactions daily, ranging from monetary transactions such as sales orders to IT-administrative ones such as audit logs.

Usually, you have a requirement to ensure 100% delivery of these transactions, regardless of whether you are receiving or sending them. To accomplish this, you need to make sure transactions are always routed and created correctly. There are many ways of achieving that, but in simple terms, we want to avoid failure scenarios that cause data loss. For the D365 Customer Engagement platform, apart from obvious bugs in logic, these failures are most commonly caused by high-level data mismatches, for example using incorrect parameters in lookup fields so that records fail to be created.
Just as commonly, you hit design limits in the technology itself. The most common technology used for transacting data in and out of the Dynamics 365 Customer Engagement platform is the standard API provided by Microsoft. Its service protection limits are there to safeguard the Dynamics 365 CE platform from misuse that would ultimately render the entire service unusable.

https://docs.microsoft.com/en-us/power-apps/developer/data-platform/api-limits

The limits have changed over time, but currently a single user is limited to roughly 6,000 service calls to the API within a 5-minute sliding window (per web server; see the linked documentation for the exact figures). If we exceed this, the service responds with a 429 (Too Many Requests) error.

Naturally, Microsoft offers many ways to deal with this, and most of them will eventually provide a solution. You can add more users, segment the calls into batches, throttle the rate of calls to the API, and so on. But essentially, you need to be proactive! Microsoft will not do this for you; this is something you need to plan for when developing your solutions and logic.
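
Being proactive typically means batching, spreading load across application users, and above all honoring the throttling signals the platform sends back. A minimal sketch of the last point, assuming a Dataverse Web API call with a bearer token you have already acquired; the URL, payload, and retry values are examples only:

```python
import time
import requests

def post_with_throttle_retry(url: str, token: str, payload: dict, max_retries: int = 5):
    """Sketch: call the Dataverse Web API and honor 429 service protection responses."""
    headers = {"Authorization": f"Bearer {token}", "Content-Type": "application/json"}
    for attempt in range(max_retries):
        resp = requests.post(url, json=payload, headers=headers, timeout=60)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp
        # The service tells us how long to back off via the Retry-After header (seconds).
        wait = int(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError("Still throttled after retries; reduce the request rate.")

# Example usage (placeholders): create an account record.
# post_with_throttle_retry(
#     "https://my-org.crm.dynamics.com/api/data/v9.2/accounts",
#     token="<access token>",
#     payload={"name": "Contoso"},
# )
```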

So is this enough? Are we there? Sorry – we aren’t.

On top of all our efforts, we also need to understand that, as business consultants using the Microsoft platform, we are not alone in providing our service. Essentially, we are partnering with Microsoft to deliver it. Using cloud services means Microsoft employs most of your application and infrastructure specialists. We also need to appreciate that cloud services are still hardware somewhere that needs to be correctly tuned to fit our requirements. The latter isn’t always plug and play!

I often see services such as Azure Logic Apps or the Dynamics 365 CE API stop responding: the logic app calls the API and Dataverse responds with a 400 error.


In a recent project, we had these issues 1-2 times per 10,000 records. That is enough to cause serious problems for an enterprise and must be dealt with. Contacting Microsoft support, you’ll learn that the only fix is for them to adjust the resources in their backend. So there is nothing you can control in advance, unless you take your chances and lower the rate of transactions per minute; but by doing so you still cannot see or control other pieces of logic calling the same service. So my advice for dealing with these types of issues: always do stress testing, so you do not need to stress.

Now, on to the requirements gap for delivering functioning ALM with Dynamics 365 Customer Engagement.

I am a true believer in DevOps; I use Azure DevOps extensively, and sometimes it is the only place for all my project activities and documentation. In my previous blog article I wrote about manual intervention, which I use in my releases via ADO pipelines to complete the activities where D365 CE falls short of providing programmatic ways of changing or applying logic to the environment. Below are the activities I wish had better ways to be manipulated using code or scripts.

  - Fiscal year: a required setting for most companies, but not possible to set via either the API or PowerShell. You have to go in and press a few buttons.
  - App feature settings, such as Export to PDF: you have to log on to the app and choose the entities used for PDF exports.
  - Preview features, such as the enhanced product experience, must be turned off manually.
  - Almost all settings in the Power Platform admin center, such as enabling Dataverse search and audit log settings.
  - Webhooks must be registered manually via the Plug-in Registration Tool.

So, unfortunately, as of today there are a few things hindering us from delivering a complete solution that offers zero-touch ALM processes for Dynamics 365 Customer Engagement.

In DevOps we trust

In DevOps we trust – to not become fools in love!

Recently I was challenged as a solution architect to deliver a fairly large project with a fixed, hard go-live date. The delivery was far from out-of-the-box and required thousands of development hours. With that said, I knew there was very little margin for error in the various stage release processes. So I decided to once again revisit the possibility of delivering a full-fledged CI/CD configuration via Azure DevOps for Dynamics 365 Sales and Azure services. The spoiler is that I might have a crush, but I am not yet in love!

Now to why I still think we have more ground to cover before I can smile through the whole sentence while proclaiming the ALM possibilities of Dynamics 365 Sales.

First of all, my ambition was to avoid manual steps in the deployment process. In my view this is not mandatory for CI/CD, but it is best practice. Adding manual steps to any process is similar to the famous “broken windows” theory, which says that crime can start with overlooking the simplest degradation in society: first it is a broken window, which ultimately can lead to a chain of events that allows the culprit to nestle its way into a controlled environment and do harm. The same goes for many processes, and ALM is no different. Which leads me to my first problem with the current ALM possibilities of Dynamics 365 Sales.

It is not possible to automate all parts of Dynamics 365 Sales programmatically!

But all is not lost, of course. There is a feature within Azure DevOps Pipelines called manual intervention. It is by no means a new feature, nor is it unique to Azure DevOps.

Please refer to Microsoft Docs here:

https://docs.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/manual-intervention?view=azure-devops

Basically, it pauses your current pipeline flow and lets you continue the automation after you have manually deployed or made changes outside the automation in the pipeline. This is, in my view, ingenious and simple at the same time; the two usually go hand in hand.
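
For reference, a hedged sketch of what this can look like in an Azure Pipelines YAML definition, using the ManualValidation task (the YAML counterpart to the classic Manual Intervention task) in an agentless server job; the user, instructions, and timeouts are placeholders:

```yaml
# Sketch: pause a YAML pipeline until someone confirms the manual D365 CE steps are done.
jobs:
- job: manual_d365_steps
  pool: server            # agentless job, required for ManualValidation
  timeoutInMinutes: 4320
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 1440
    inputs:
      notifyUsers: 'release.manager@example.com'
      instructions: 'Set fiscal year, app feature settings and webhooks, then resume.'
      onTimeout: 'reject'
```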

This merging of manual and automated steps is something that is often missing in process-oriented solution systems.

I would love to see this type of behavior available in other systems without requiring development and customization.

So to summarize: Dynamics 365 Sales + Azure DevOps + Manual Intervention = almost complete ALM 😊