A Dive into the Future: My First Experience with GPT-4

GPT-4 in my hands…I don’t think I will sleep much tonight!

Introduction

As an AI enthusiast, I was beyond excited when OpenAI announced the release of GPT-4, the newest iteration of their ground-breaking language model. Given OpenAI's proven track record of pushing the boundaries of natural language processing, I knew I had to give GPT-4 a spin and share my experiences with you all. So, without further ado, let's embark on a journey to explore the capabilities of GPT-4 and discover how this powerful tool has the potential to revolutionize the way we interact with technology.

Impressive Conversational Skills

One of GPT-4’s primary selling points is its conversational prowess, so I put it to the test by engaging in a simple Q&A session. To my amazement, the model provided detailed, informative, and coherent answers to my questions. I tested its knowledge on various topics, from historical events and scientific concepts to pop culture and current events. GPT-4 handled them all with ease, delivering responses that were human-like in their depth and accuracy.

Creative Writing and Storytelling

I wanted to see if GPT-4 could unleash its creative potential, so I asked it to generate a short story based on a simple prompt. The result was astonishing – the model crafted a captivating narrative with well-developed characters, a coherent plot, and a satisfying conclusion. The storytelling abilities of GPT-4 are leaps and bounds ahead of its predecessors, making it a valuable tool for writers in need of inspiration or assistance in crafting their stories.

Coding Assistance

As a developer, I was particularly curious about GPT-4’s ability to generate and understand code. I asked the model to help me with some Python and JavaScript snippets, and it provided accurate, functional code in both cases. This feature has the potential to revolutionize the way we approach programming tasks, making it easier for developers to solve complex problems and streamline their workflows.

Limitations and Ethical Considerations

Despite its incredible abilities, GPT-4 is not without limitations. It occasionally provided inaccurate information or misunderstood the context of my questions. However, these instances were relatively rare compared to the overall quality of its responses. Additionally, it’s essential to consider the ethical implications of using AI systems like GPT-4, including issues like potential biases in training data, malicious use, and user privacy.

Conclusion

My experience with GPT-4 has been nothing short of remarkable. Its conversational skills, creative writing abilities, and coding assistance are game-changers in the realm of AI and natural language processing. While it’s essential to acknowledge its limitations and approach its use with ethical considerations in mind, GPT-4 offers immense potential for revolutionizing the way we interact with technology. As an AI enthusiast, I’m thrilled to witness the continued evolution of these groundbreaking models and can’t wait to see what the future holds for AI-powered language systems.

#Mox #gpt4

ChatGPT-monitored Microsoft Teams chats

Use ChatGPT to automatically update your team chat messages for clarity.

Use ChatGPT to update chat messages to boost clarity!

As we continue to work remotely in the wake of Covid, communication through text is more common than ever. With messages flying back and forth, there is a risk of misinterpretation, which can cause serious issues: a message can easily be read with the wrong intent or meaning. To minimize this risk, I created a Power Automate flow that checks my Teams messages for any risk of misinterpretation.

To achieve this, I used the newly released ChatGPT API. ChatGPT is a large language model trained by OpenAI to understand natural language and generate human-like responses. By leveraging ChatGPT's capabilities, I created a Power Automate flow that analyzes the text of my messages and checks for potential misunderstandings. If a chat message was deemed to contain ambiguous content, too many alternative meanings, or obvious spelling mistakes, the flow would update my Teams chat message.

To accomplish this, I used:

  • A custom connector to the OpenAI ChatGPT API
  • A custom connector to the Microsoft Graph API
  • A Power Automate flow

The first step was to set up the Power Automate flow. I started by connecting it to my Microsoft Teams account and setting up a trigger that detects new messages. To avoid checking all of them during the proof of concept, I added a condition so that the flow only checks messages where I include a code: putting # at the start of a message marks it for checking. Then, I added an action to get the text of the message.

Next, I created an action to call the ChatGPT API through my custom connector. I passed in the text of the message as input, requested an analysis, and added an instruction so ChatGPT knew exactly what to do with the content. The API then returned a response with the analysis, including any potential risks of misinterpretation and a suggested rewrite.

Finally, I added a condition to check whether the analysis indicated a risk of misinterpretation. I let ChatGPT rate it between 1 and 100, and along the way I found that a score above 75 was a good threshold for having ChatGPT do rewrites. I also made sure to set the temperature parameter low to minimize the creativity in the responses, making them more predictable.

Sending the Teams chat message to the ChatGPT API with an instruction prompt.
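For readers who want to see what the custom connector has to send, here is a minimal sketch of the same request in plain Python. The instruction prompt, the JSON answer format, and the 75 threshold mirror the flow described above, but the exact wording is my own illustrative assumption, and the OPENAI_API_KEY environment variable is assumed to hold a valid key.

    import os
    import json
    import requests

    OPENAI_URL = "https://api.openai.com/v1/chat/completions"

    # Illustrative instruction prompt mirroring the flow above: rate the risk of
    # misinterpretation from 1 to 100 and suggest a clearer rewrite, as JSON.
    INSTRUCTION = (
        "Rate the risk that the following Teams message is misinterpreted on a "
        "scale of 1-100 and suggest a clearer rewrite. "
        'Answer only with JSON: {"risk": <number>, "rewrite": "<text>"}'
    )

    def analyze_message(message_text: str) -> dict:
        response = requests.post(
            OPENAI_URL,
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-3.5-turbo",
                "temperature": 0.2,  # low temperature keeps the rewrite predictable
                "messages": [
                    {"role": "system", "content": INSTRUCTION},
                    {"role": "user", "content": message_text},
                ],
            },
            timeout=30,
        )
        response.raise_for_status()
        return json.loads(response.json()["choices"][0]["message"]["content"])

    analysis = analyze_message("#Can you fix the thing we talked about asap")
    if analysis["risk"] > 75:  # the threshold described above
        print("Suggested rewrite:", analysis["rewrite"])

In the actual flow, this request goes through the custom connector, and the Graph API connector then handles updating the Teams message as described above.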

‘DevOps’ + ‘ChatGPT/AI’ == ‘TRUE’

OpenAI's GPT models are a powerful tool that can be used in many different applications, but one of the areas where it made the most sense to me was within the DevOps platform. For those who don't know what DevOps is, it can briefly be defined as a combination of software development and IT operations that aims to automate and streamline the software delivery process.

In this blog post, we will explore how to use OpenAI GPT models within DevOps to improve the software development process.

What are OpenAI GPT models?

OpenAI GPT (Generative Pre-trained Transformer) models are machine learning models that use deep learning techniques to generate natural language. These models are trained on large datasets of text and can generate human-like responses to prompts.

The GPT models are pre-trained on large amounts of text and can be fine-tuned on specific tasks. This allows the models to generate natural language responses to specific prompts, such as questions or requests for information.

How can OpenAI GPT models be used in DevOps?

One way to use OpenAI GPT models in DevOps is to improve the software development process. Here are a few ways that GPT models can be put to work within DevOps:

  1. Code commenting and documentation quality analysis: Use GPT-3 to analyze the quality of comments in the codebase and suggest improvements to make them clearer, more concise, and more informative. Also use GPT-3 to analyze the quality of the documentation in the wiki. This process helps ensure that the documentation accurately reflects the functionality of the code and the user stories.
  2. Test automation: Use GPT-3 to automatically generate test automation scripts based on natural language descriptions of the desired tests. This process helps streamline the testing process by automating the generation of test scripts and reducing the need for manual testing.
  3. Code style enforcement: Use GPT-3 to enforce a consistent code style across the codebase by suggesting corrections or reformatting the code. This process helps ensure that the codebase follows consistent formatting and style guidelines, making it easier to read and maintain.
  4. Project estimation: Use GPT-3 to estimate the time and resources required to complete a project based on natural language descriptions of the requirements and constraints. This process helps ensure accurate project planning and resource allocation.
  5. Code standardization: Use GPT-3 to standardize the codebase by suggesting common programming practices, coding standards, and design patterns. This process helps ensure that the codebase follows consistent coding practices and design patterns, making it easier to read and maintain.
  6. Improving commit messages: Use GPT-3 to suggest better commit messages based on the changes made to the code. This process helps ensure that commit messages accurately reflect the changes made to the codebase.
  7. Enhancing natural language search: Use GPT-3 to analyze code and generate descriptions and tags for functions and classes to improve your DevOps platform's natural language search capabilities. This process helps improve the discoverability of code and makes it easier to find specific code snippets.
  8. Code generation: Use GPT-3 to generate code snippets based on natural language prompts from user stories and acceptance criteria. This process helps automate the code writing process, reducing the need for manual coding.
  9. Automated testing: Use GPT-3 to generate test cases based on the code and user stories, which can then be automatically executed to test your code. This process can speed up test-case automation by an order of magnitude.
  10. Project management: Use GPT-3 to generate reports and dashboards based on project data. This process helps automate project reporting, ensuring accurate and timely reporting.
  11. Natural language interface: Use GPT-3 to create a natural language interface for your DevOps platform. This process helps improve the platform’s usability by allowing users to interact with it using natural language commands. For example, ask DevOps to create a user story to resolve the feature gap from bug 123 or ask it to check which pipelines are currently running in the organization.

Any one of these warrants a blog post of its own, but this post is more about the art of the possible than the in-depth possibilities.
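Still, to make one of these concrete, here is a hedged sketch of number 6, improving commit messages: it feeds the staged git diff to a GPT model and prints a suggested message. The prompt wording and model choice are illustrative assumptions rather than a finished tool.

    import os
    import subprocess
    import requests

    def suggest_commit_message() -> str:
        """Ask a GPT model to propose a commit message for the staged changes."""
        diff = subprocess.run(
            ["git", "diff", "--cached"], capture_output=True, text=True, check=True
        ).stdout[:6000]  # keep the prompt comfortably below the token limit

        response = requests.post(
            "https://api.openai.com/v1/chat/completions",
            headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
            json={
                "model": "gpt-3.5-turbo",  # illustrative; other GPT models work similarly
                "temperature": 0.3,
                "messages": [
                    {
                        "role": "system",
                        "content": "Write a concise, imperative git commit message "
                                   "(max 72 characters) describing the following diff.",
                    },
                    {"role": "user", "content": diff},
                ],
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"].strip()

    if __name__ == "__main__":
        print(suggest_commit_message())

Wired into a pre-commit hook or a pipeline step, the same idea turns from a personal helper into a team convention.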

Implementation takes forever?!

You might think it takes a huge effort to accomplish any of these features within Azure DevOps. This is, of course, not true. You can easily start with a proof of concept by copying DevOps text into any GPT-based chat application such as ChatGPT, or by using the GPT playgrounds available on, for example, the OpenAI website.

A few examples using a manual approach

GPT-3 as the Azure DevOps API, where the prompt is the requirement and the output is an API call you can copy and paste into Postman:

GPT-3 as a business consultant writing user stories, where the prompt is the requirement description from a customer workshop:

This is not the best long-term solution, but it will quickly give you an idea of the feasibility of using this approach.

Integrating GPT-3 programmatically

More automated approaches would integrate GPT-3 directly into Azure DevOps as Azure DevOps extensions, Logic Apps, Power Automate flows, or Azure Functions.

I prefer to consume the GPT model through Azure OpenAI and reuse the API/custom connector across multiple DevOps organizations or any other application requiring a similar service. This also gives me the ability to manage everything on one platform. But it works just as well using the OpenAI APIs directly. There is also a cost perspective: prices differ between the two and keep changing.
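To give a feel for the Azure OpenAI route, the sketch below shows the shape of the call such an API/custom connector would wrap. The resource name, deployment name, and prompt are placeholders to replace with your own; the endpoint format and api-key header follow the Azure OpenAI REST API.

    import os
    import requests

    # Placeholders: use your own Azure OpenAI resource and deployment names.
    RESOURCE = "my-openai-resource"
    DEPLOYMENT = "gpt-35-turbo"
    API_VERSION = "2023-05-15"

    def ask_azure_openai(prompt: str) -> str:
        url = (
            f"https://{RESOURCE}.openai.azure.com/openai/deployments/"
            f"{DEPLOYMENT}/chat/completions?api-version={API_VERSION}"
        )
        response = requests.post(
            url,
            headers={"api-key": os.environ["AZURE_OPENAI_KEY"]},
            json={
                "messages": [
                    {"role": "system", "content": "You help review Azure DevOps work items."},
                    {"role": "user", "content": prompt},
                ],
                "temperature": 0.2,
            },
            timeout=30,
        )
        response.raise_for_status()
        return response.json()["choices"][0]["message"]["content"]

    print(ask_azure_openai("Rewrite this acceptance criterion so it is testable: ..."))

The same function can then sit behind a custom connector, an Azure Function, or a pipeline task, which is what makes it reusable across DevOps organizations.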

Conclusion

OpenAI GPT models are a powerful tool that can be applied within DevOps to improve the software development process. Whether it's automating customer support, generating code, automating testing, or writing documentation, GPT models can save time and improve the overall quality of the software. I don't foresee a future without GPT-3 or similar models supporting the work we do, but I equally have no clue how much our DevOps engagement will change. Only time will tell, but I am sure we have just seen the beginning of it all.

If you have other cool ideas on using GPT-3 within your DevOps process, please let me know!

Coding viagra for oldtimers

Over the years, I’ve developed a lot of programming skills through various projects and assignments. However, as time passed, some of these skills became less useful or outdated, and I found myself struggling to find new and exciting ways to stay engaged with coding.

That’s when I discovered GitHub Copilot, an AI-powered code completion tool developed by OpenAI and GitHub. Using machine learning algorithms, Copilot is designed to help developers write code more efficiently, by suggesting lines of code and even entire functions based on the context of the code being written.

At first, I was skeptical about using an AI-powered tool to help me write code. I felt like it might take away from the creative aspect of coding, or make me feel like I wasn’t really doing the work. But after using Copilot for a few weeks, I quickly realized that it was a game-changer.

The AI models that power Copilot are incredibly accurate, and they seem to have an almost intuitive understanding of the code I’m trying to write. As I type, Copilot provides helpful suggestions and code snippets that save me time and frustration. It’s like having a programming partner who can anticipate my every move and help me write code faster and more efficiently.

But what I love most about using Copilot is how it’s reignited my joy for coding. It’s allowed me to focus on the parts of programming that I truly enjoy, such as problem-solving, rather than getting bogged down in the nitty-gritty details of syntax and formatting. With Copilot taking care of the grunt work, I can focus on the creative aspects of coding, such as designing algorithms and exploring new approaches.

Of course, Copilot is not a magic bullet that can solve all of my coding problems. There are times when it makes mistakes or suggests code that doesn't quite fit what I'm trying to do. But overall, the benefits of using Copilot have been overwhelming, and it has boosted my confidence and helped me stay engaged with coding in new and exciting ways. Nowadays, when I think about designs, I often turn to AI and Copilot to quickly build the proof of concept I need to feel confident about them.

In conclusion, if you're a developer who's struggling to find new ways to stay engaged with coding, I highly recommend giving GitHub Copilot a try. This tool has been a game-changer for me: it has revived my enthusiasm for coding, and I'm excited to see how it continues to evolve and help other developers find joy in their work.

ChatGPT Plus – Turbo mode!

Summary of the blog post below, by Turbo –
The author of this text writes about their experience with ChatGPT and its premium version, Turbo. They discuss the differences between the various models available on Azure OpenAI and mention that they have found the speed of Turbo to be much faster than the standard version. The author also highlights how ChatGPT has become an indispensable tool for them in their daily life for tasks such as summarizing large texts, learning new things, and translating texts. They also recommend using the Chrome voice control extension for an even more convenient experience. In conclusion, the author encourages businesses to invest in OpenAI chatgpt services as they believe that AI will play a crucial role in the future.

ChatGPT has had issues with accessibility. No wonder, given the mind-blowing 100 million users after 2 months.

But fear not, salvation is here! I acquired my premium account with ChatGPT for a measly 25 USD per month. This gives me access to faster responses and better uptime (hopefully).

Now what is truly the difference? Well….let us ask ChatGPT Turbo 🙂

First attempt… ouch! Are there already 100 million premium users?

My first attempt at asking Turbo was a failure. I think this is intermittent, given that Turbo is just out and I would think everyone is testing it at once.

My second attempt was more fruitful.

This statement from Turbo, I believe, is simply not true?! But nevertheless, we can see some differences in the URL used to access Turbo.

As we are aware, there are multiple models. Let us briefly have a look at which models we can easily access and the differences.

In Azure OpenAI, we have access to the following models when we want to build AI-powered applications.

We can also find more information in the official OpenAI documentation or in Microsoft Docs.

https://learn.microsoft.com/en-us/azure/cognitive-services/openai/concepts/models#gpt-3-models

Regardless of which model is used in which version, I can already say the speed is many times faster than the standard version, but I have no actual proof that it provides more nuanced responses. From what I can see, the standard chat page does not let me adjust any parameters, which I have not needed so far: I simply ask ChatGPT to adjust its answer according to temperature and max tokens, where temperature is creativity and max tokens is the length of the response from ChatGPT.
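Where those parameters do matter, they can be set explicitly against the API instead of asking the chat page to emulate them. A minimal sketch, assuming the public chat completions endpoint and an API key in the environment:

    import os
    import requests

    response = requests.post(
        "https://api.openai.com/v1/chat/completions",
        headers={"Authorization": f"Bearer {os.environ['OPENAI_API_KEY']}"},
        json={
            "model": "gpt-3.5-turbo",
            "temperature": 1.2,   # higher = more creative, lower = more predictable
            "max_tokens": 150,    # caps the length of the response
            "messages": [{"role": "user", "content": "Summarize the plot of Hamlet."}],
        },
        timeout=30,
    )
    response.raise_for_status()
    print(response.json()["choices"][0]["message"]["content"])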

In addition to the standard page, I have built my own apps to access the various OpenAI APIs, but as of now my default homepage is https://chat.openai.com/chat, and this is where I go 9 times out of 10.

ChatGPT has become my number-one place for the following daily tasks:

  • Summarize large texts.
    I simply don’t always have the time to read long texts. I use summaries to help me get information quickly or as a teaser to actually prioritize time to read the text in its entirety.
  • Learn new things
    Getting responses and teaching me in a dialogue fashion instead of learning texts from standard documentation has helped me greatly. Oftentimes I learn new things by first trying to grasp the holistic essence of a concept before I spend more time drilling down into the details.
  • Text translations
    I work with multiple languages daily, and documenting and understanding text is important. My go-to language for documentation is English, but I do document in other languages every now and then, and the translation of longer texts with GPT-3.5-based models has, in my experience, proven vastly superior to Google Translate.

Another tip from me is to use the Chrome voice-control extension. It supports many languages and allows me to talk to ChatGPT without using my keyboard. This has proven very helpful and raised some eyebrows on the train to work!

My recommendation to all businesses out there who care to power up their colleagues is to allow the cost of OpenAI ChatGPT Plus services to become a deductible expense. It has been said before, and let us say it again: AI will not replace you in the near future, but the people who use it will!

Happy ChatGPTying

Disclaimer: This blogpost was partially written by TURBO!

R.I.P. LCS – long live PPAC

The topic of the recent Dynamics 365 FastTrack TechTalk was the Power Platform Admin Center integration with LCS. The integration has existed for some time, and development continues. But the major takeaway concerned the future of the admin tooling currently in LCS for deploying and maintaining Dynamics environments, given the current One Dynamics admin platform strategy. Not far into the session, the following statement was proclaimed:

Move everything so that we have one unified admin experience

In a nutshell, what this means is that in 2-3 years we will see a change where no new Dynamics environments are deployed in LCS; all deployment and maintenance will be done purely from PPAC, with Dynamics 365 Finance becoming an app in a shared Power Platform environment rather than its own standalone environment.

One Dynamics – One Platform

If you wish to view the entire session, please check out the URL below.

https://community.dynamics.com/365/dynamics-365-fasttrack/b/techtalks/posts/one-admin—lcs-ppac-integration-december-7-2022

One customer journey – One Solution – Thousands of connectors.

Most Connectors win!

Microsoft recently announced that they surpassed 1,000 connectors. This might not mean much to many, but successful digital transformation projects are oftentimes riddled with non-working integrations, especially during transitional architectures. Having access to vendor-supported, cross-application integration connectors is oftentimes not only a long-term selling point but also what is required to meet time-to-market demands. No one around today wants to invest in their own bespoke integrations unless it is necessary.

Dynamics 365 and the rest of the Microsoft application stack can today connect to virtually any known application, either through an application-specific connector or via a more generic one.

What is a connector?

There are heaps and heaps of different types of connectors, but within Microsoft we commonly refer to the Azure or Power Platform connectors.

The integration of Microsoft Dynamics 365 with connectors provides businesses with a powerful platform to automate their business processes, streamline their workflows, and enhance their overall productivity. The combination of Dynamics 365 and connectors allows businesses to seamlessly integrate their cloud-based and on-premise applications and enables data exchange between these systems in real time.

What are Azure Connectors?

Azure Connectors are a set of pre-built, managed, and configurable services that provide a fast and secure way to integrate applications and services. These connectors provide an easy-to-use interface for exchanging data between systems, and are designed to work seamlessly with Azure services and other cloud-based and on-premise applications. With Azure Connectors, businesses can eliminate the need for custom integration code and reduce the time required to integrate systems, which ultimately leads to increased efficiency and cost savings.

How do Azure Connectors work with Dynamics 365?

Azure Connectors provide a bridge between Dynamics 365 and other applications and services, enabling the exchange of data in real-time. The connectors can be used to integrate Dynamics 365 with a variety of applications, including SharePoint, OneDrive, Excel, and many more. With the use of Azure connectors, businesses can easily connect Dynamics 365 to their existing applications and services, allowing them to automate workflows, streamline processes, and enhance the overall productivity of their organization.

What are the benefits of using Azure Connectors with Dynamics 365?

  1. Increased Efficiency: Integrating Dynamics 365 with Azure Connectors eliminates the need for manual data entry and reduces the time required to complete tasks. This increased efficiency enables businesses to focus on more important business activities, leading to enhanced productivity and cost savings.
  2. Improved Data Quality: Azure Connectors provide a secure and reliable way to exchange data between systems, ensuring that data is accurate and up-to-date. This improved data quality provides businesses with a single source of truth, which can be used to make informed business decisions.
  3. Enhanced Collaboration: Integrating Dynamics 365 with Azure Connectors enables businesses to collaborate more effectively by providing access to real-time data from multiple systems. This enhanced collaboration leads to improved teamwork and better decision-making.
  4. Customizable Workflows: Azure Connectors allow businesses to automate workflows, streamline processes, and enhance the overall efficiency of their organization. The connectors can be configured to meet the specific needs of each business, providing a tailored solution optimized for their unique requirements.
  5. Enterprise Support: Whenever you need a supporting hand – Microsoft has 24/7 support for all connectors.
  6. Monitoring: Built-in monitoring is a click away at no extra cost. Get alerts and track trends in the usage of your connectors.
  7. Robustness: Built-in retry and throttling for connectors exist out of the box.

Conclusion

In conclusion, integrating Microsoft Dynamics 365 with Azure Connectors gives businesses a powerful platform to automate their business processes, streamline their workflows, and enhance their overall productivity. With Azure Connectors, businesses can easily connect Dynamics 365 to their existing applications and services. The benefits include increased efficiency, improved data quality, enhanced collaboration, and customizable workflows, all at a fraction of the cost of bespoke integration and with full support from Microsoft.

Become an expert using these connectors, and you will be a valuable resource going forward in any size digital transformation project!

A morning like any other morning!

Waking up at 6.17 AM was not an accident. My personAId Xris (pronounced Chris) woke me at the end of my REM cycle, knowing that I would still have enough time for my obligations this regular Tuesday.

I frown briefly, thinking of the nanosec pricing for the publicly accessible quantum computing services. Gathering daily insights from my 17 petabytes of demeanor data (all my past data interactions) costs about 3.23 nanosecs. Xris requires this ongoing data analytics to have the insights needed to act as my assistant and business partner.

Tuesday has physical exercise first on the agenda. I put my AR glasses on and start running down the familiar island trail. Conversations from ongoing projects are hovering transparently and steadily in front of my field of vision. I whisper commands (voice commands only audible to Xris) to give me the necessary updates.

I get a call while listening to Xris highlighting the sentiments from today’s conversations and essential excerpts. The priority call from my manager is allowed directly to me without being greeted by my conversational bot. I have been running for 20 minutes, so Xris modulates my voice to disguise the shortness of breath.

I’m asked to join an urgent meeting to discuss some last-minute project changes. I realized 5 minutes into the meeting that I was merely there as a listener and did not have any input to share, so I switched to bot-mode despite company policy. The steady increase of non-physical meeting participation using bots as stand-ins has encouraged many enterprises to enforce meeting ethics disallowing unannounced botmodes.

In bot-mode, Xris can track conversation and sentiments to foresee possible required participation on my part. I would get an early warning and a recap before rejoining the meeting. Instead, I get into the shower listening to the last episode of my favorite podcast.

This might all sound like sci-fi, but all the technology envisioned in the previous paragraphs is, in fact, very much a reality. It is just not yet part of mainstream applications.

My ambition was to get you thinking about the art of the possible. The only way any of this ever becomes sought-after applications is by implementing it in our next projects, and it all starts in your mind.

You don't need to be a large enterprise to utilize the latest conversational-intelligence IVR solutions from #Nuance with your #Dynamics365 Customer Service application. Nor do you need an organization of a hundred-plus customer service agents to reach break-even when enabling virtual assistants.

We might not get Xris today, but we do get an experience that allows organizations to reinvest the money saved on handling costs, thanks to the self-service that virtual assistants provide, into further development.

For all the geeks out there like me, check out Microsoft's development framework for supporting virtual assistants in your environments. Why not make your next customer experience project, whether internal or customer-facing, include a virtual assistant in the Teams channel answering all those recurring queries?

https://microsoft.github.io/botframework-solutions/index

https://docs.microsoft.com/en-us/microsoftteams/platform/samples/virtual-assistant
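To give a feel for what building on that framework looks like, here is a minimal sketch of a bot that answers recurring queries, using the botbuilder Python SDK. The FAQ entries are hypothetical placeholders, and the hosting plumbing (adapter, web app, Teams channel registration) is left out for brevity.

    from botbuilder.core import ActivityHandler, MessageFactory, TurnContext

    # Hypothetical FAQ entries; a real assistant would pull these from a
    # knowledge base or QnA service rather than a hard-coded dictionary.
    FAQ = {
        "opening hours": "Support is staffed weekdays 08:00-17:00 CET.",
        "reset password": "Use the self-service portal linked on the intranet start page.",
    }

    class FaqBot(ActivityHandler):
        """Answers recurring questions in a Teams channel and escalates the rest."""

        async def on_message_activity(self, turn_context: TurnContext):
            text = (turn_context.activity.text or "").lower()
            for keyword, answer in FAQ.items():
                if keyword in text:
                    await turn_context.send_activity(MessageFactory.text(answer))
                    return
            await turn_context.send_activity(
                MessageFactory.text("I could not find an answer, routing you to an agent.")
            )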

Hoping to see many examples and ideas in the upcoming #MicrosoftBuild

Don’t forget to register!

https://register.build.microsoft.com/

#theartofpossible

D365 CE + FO + Large transactional volumes

When integrating the Dynamics 365 CE (Customer Engagement) platform, which is based on Microsoft Dataverse, with D365 FO (Finance and Operations), we have a few options.

The suggested Microsoft method is to use Dual-write if we require synchronous replication of data between the two applications. Dual-write has a lot of out-of-box capabilities to accomplish a functioning link between D365 CE and FO, but in my experience, this only works well where the use case does not involve a large number of transactions over a short time frame.

The integration templates include a lot of different tables, and there are many standard integrations out-of-box for both D365 Sales and Field Service. But as soon as you wish to adjust and extend them, you will immediately hit various challenges.

You will learn that Dual-write is still a very immature product and still has bugs that you will need to understand and create workarounds for. You will realize that the GUI has limitations, and you'll learn how to deal with those. You will appreciate that many of the D365 FO entities do not support Dual-write because their data cannot be accessed via the OData web API.

But when you finally overcome all the teething issues that I believe Dual-write is struggling with, you then need to appreciate the following do's and don'ts that Microsoft has laid out.

Based on our previous enterprise definition, we would not see many use cases fitting the above list.

In particular, you would most often have multiple instances of D365 CE.

But even if you do fulfill the above list, you'd soon hit a wall in terms of large volumes.

Not because Dual-write has issues dealing with large volumes, but because I would always try to avoid synchronous solutions for large volumes. I'd always try to design large transactional flows using a decoupled model. There are multiple scenarios where data can fail to be created, read, updated, or deleted (CRUD). We need to take this into account, have ways to deal with these failures, and allow users to make guided decisions to rectify data and proceed.
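As a hedged sketch of that decoupled shape, the snippet below buffers records in an Azure Service Bus queue instead of writing synchronously, and dead-letters anything that fails so a user can make a guided decision later. The connection string, queue name, and the write_to_dynamics helper are illustrative placeholders.

    import json
    from azure.servicebus import ServiceBusClient, ServiceBusMessage

    CONNECTION_STRING = "<service-bus-connection-string>"  # placeholder
    QUEUE_NAME = "d365-sales-orders"                        # illustrative queue name

    def write_to_dynamics(order: dict) -> None:
        """Placeholder for the actual, batched and throttled, D365 CE/FO write."""
        raise NotImplementedError

    def enqueue_order(order: dict) -> None:
        """Hand the record to the queue instead of calling Dataverse synchronously."""
        with ServiceBusClient.from_connection_string(CONNECTION_STRING) as client:
            with client.get_queue_sender(queue_name=QUEUE_NAME) as sender:
                sender.send_messages(ServiceBusMessage(json.dumps(order)))

    def process_queue() -> None:
        """A separate worker drains the queue and dead-letters failed CRUD operations."""
        with ServiceBusClient.from_connection_string(CONNECTION_STRING) as client:
            with client.get_queue_receiver(queue_name=QUEUE_NAME, max_wait_time=5) as receiver:
                for message in receiver:
                    try:
                        write_to_dynamics(json.loads(str(message)))
                        receiver.complete_message(message)
                    except Exception as error:
                        receiver.dead_letter_message(message, reason=str(error))

The dead-letter queue then becomes the place where a user, not a nightly sync job, decides how to rectify the data and proceed.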

Analyzing and rectifying issues with data sync using Dual-write is truly a technical, administrative task.

In addition, we need to consider the API service protection limits of Dynamics 365 CE. Dual-write does not consider other sources or logic changing the same data, and therefore it is a pain to have centralized control of all the CRUD operations.

So, still as of today, 29 April 2022, I would use technologies other than Dual-write in any given enterprise scenario.

Dynamics 365 CE 99.9% Enterprise Support

As the title suggests, when dealing with D365 CE deployment projects, we expect it to fit organizational requirements to close to 99.9%.

No, I am not talking about the SLA from December 2021.

I’m trying to quantify the required features for admin, developers, and users we would expect in enterprise scenarios. Maybe not exactly that amount – and to be fair I haven’t done the calculations. But my point is that there are a few important things one needs to consider running an enterprise-scale D365 CE project. Let’s start from the top.

What do we consider to be enterprise? As an enterprise architect, I would say it is the governing realm of all business processes, people, solutions, and data we require to deliver a specified set of services or products to a given market. In essence, this is either a company, a group of companies, or a business unit within a company.

But for this article, the enterprise also emphasizes large volumes of transactions, traceability requirements, risk mitigation, scalability, Application Lifecycle support, etc. The list goes on, but all refer to the variables that occur within companies doing extensive scale activities, combining many people across many systems and solutions.

For this article, I have focused on development and large amounts of transactions being sent to and from D365 Customer Engagement.

Firstly the requirement of dealing with large amounts of transactions.

It is not unusual for large organizations to deal with hundreds of thousands of transactions daily, ranging from monetary transactions (e.g., sales orders) to IT administrative ones such as audit logs.

Usually, you'd have a requirement to ensure 100% delivery of these transactions, regardless of whether you are receiving or sending. To accomplish this, you need to make sure transactions are always routed and created correctly. There are many ways of achieving just that, but in simple terms, we want to avoid failure scenarios causing data loss. For the D365 Customer Engagement platform, apart from obvious bugs in logic, these failure scenarios are most commonly caused by high-level data mismatches: for example, using incorrect parameters in lookup fields, causing records to fail to be created.
But every so often, you also hit design limits in your technology. The most common technology used for transacting data in and out of the Dynamics 365 Customer Engagement platform is the standard API provided by Microsoft. Its service protection limits are there to safeguard the Dynamics 365 CE platform from misuse that would ultimately render the entire service unusable.

https://docs.microsoft.com/en-us/power-apps/developer/data-platform/api-limits

The limits have changed over time, but currently one user is limited to 8,000 service calls to the API within a 5-minute sliding window. If we exceed this, the service responds with a 429 error.

Naturally, Microsoft offers many ways to deal with this, and most of them will eventually provide a solution. You can add more users, segment the calls into batches, throttle the speed of calls to the API, etc. But essentially, you need to be proactive! Microsoft will not do this for you – this is something you need to plan for developing your solutions and logic.
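Being proactive can be as simple as honoring the Retry-After header that accompanies the 429. A minimal sketch against the Dataverse Web API, where the organization URL and bearer token are placeholders:

    import time
    import requests

    ORG_URL = "https://yourorg.crm.dynamics.com"  # placeholder organization URL
    TOKEN = "<bearer-token-from-azure-ad>"        # placeholder access token

    def create_record(entity_set: str, payload: dict, max_retries: int = 5) -> requests.Response:
        """POST a record to the Dataverse Web API, backing off when throttled (HTTP 429)."""
        url = f"{ORG_URL}/api/data/v9.2/{entity_set}"
        headers = {"Authorization": f"Bearer {TOKEN}", "Content-Type": "application/json"}

        for attempt in range(max_retries):
            response = requests.post(url, json=payload, headers=headers, timeout=30)
            if response.status_code != 429:
                response.raise_for_status()
                return response
            # Service protection limit hit: wait as instructed, then retry.
            time.sleep(float(response.headers.get("Retry-After", 2 ** attempt)))
        raise RuntimeError("Still throttled after retries; consider batching or slowing down")

    create_record("accounts", {"name": "Contoso"})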

So is this enough? Are we there? Sorry – we aren’t.

On top of all our efforts, we also need to understand that we are not alone in providing our service as business consultants using the Microsoft platform. Essentially, we are partnering with Microsoft to deliver the service. Using cloud services means Microsoft employs most of your application and infrastructure specialists. We also need to appreciate that cloud services are still hardware somewhere that needs to be correctly tuned to fit our requirements, and the latter isn't always plug and play!

I often see services such as Azure Logic Apps or the Dynamics 365 CE API stop responding: the logic calls the API, and Dataverse responds with a 400.


In a recent project, we had these issues 1-2 times out of 10,000 records. That is enough to cause grave problems for an enterprise and must be dealt with. Contacting Microsoft support, you'll learn that the only fix is to adjust the resources in their backend. So there is nothing you can control in advance, unless you take your chances and lower the rate of transactions per minute. But by doing so, you'd never know and control other areas of logic calling the same service. So my advice for dealing with these types of issues: always do stress testing, so you do not need to stress.

Moving on to the requirements gap for delivering functioning ALM with Dynamics 365 Customer Engagement.

I am a true believer in DevOps; I use Azure DevOps extensively, and sometimes it is the only place for all my project activities and documentation. In my previous blog article I wrote about manual intervention: it is what I use in my releases via ADO pipelines to complete the set of activities required where D365 CE falls short of providing programmatic ways of changing or applying logic to the environment. Below I would like to address those activities that I wish had better ways to be manipulated using code or scripts.

  • Fiscal year: a required setting for most companies, but not possible to set via the API or PowerShell. You have to go somewhere and press a few buttons.
  • App feature settings, such as Export to PDF: you have to log on to the app and choose the entities used for PDF exports.
  • Turning off preview features, such as the enhanced product experience, must be done manually.
  • Almost all settings in the Power Platform admin center, such as turning on Dataverse search or configuring audit log settings.
  • Registering webhooks via the Plug-in Registration tool must be done manually.

So, unfortunately, as of today there are a few things hindering us from delivering a complete solution offering zero-touch ALM processes for Dynamics 365 Customer Engagement.