A morning like any other morning!

Waking up at 6:17 AM was no accident. My personAId Xris (pronounced Chris) woke me at the end of my REM cycle, knowing that I would still have enough time for my obligations this regular Tuesday.

I frown briefly, thinking of the nanosec pricing for the publicly accessible quantum computing services. Gathering daily insights from my 17 petabytes of demeanor data (all my past data interactions) costs about 3.23 nanosecs. Xris requires these ongoing data analytics to have the insights needed to act as my assistant and business partner.

Tuesday has physical exercise first on the agenda. I put my AR glasses on and start running down the familiar island trail. Conversations from ongoing projects are hovering transparently and steadily in front of my field of vision. I whisper commands (voice commands only audible to Xris) to give me the necessary updates.

I get a call while listening to Xris highlight the sentiments and essential excerpts from today’s conversations. The priority call from my manager is put through to me directly without being greeted by my conversational bot. I have been running for 20 minutes, so Xris modulates my voice to disguise the shortness of breath.

I’m asked to join an urgent meeting to discuss some last-minute project changes. Five minutes into the meeting, I realize that I am merely there as a listener and do not have any input to share, so I switch to bot-mode despite company policy. The steady increase of non-physical meeting participation using bots as stand-ins has encouraged many enterprises to enforce meeting ethics disallowing unannounced bot-modes.

In bot-mode, Xris can track the conversation and sentiment to foresee possible required participation on my part. I would get an early warning and a recap before rejoining the meeting. Instead, I get into the shower listening to the latest episode of my favorite podcast.

This might all sound like sci-fi, but all the technology envisioned in the previous paragraphs is, in fact, very much a reality. It is just not yet part of mainstream applications.

My ambition was to get you thinking about the art of the possible. The only way these will ever become sought-after applications is by implementing them in our next projects, and it all starts in your mind.

You don’t need to be a large enterprise to utilize the latest conversational intelligence IVR solutions from #Nuance with your #Dynamics365 Customer Service application. Nor do you need an organization with a hundred-plus customer service agents to reach break-even on enabling virtual assistants.

We might not get Xris today, but we do get an experience that allows organizations to reinvest in development the money saved through the reduced handling costs that virtual-assistant self-service provides.

For all the geeks out there like me, check out Microsoft’s development framework for building virtual assistants in your environments. Why not have your next customer experience project, whether internal or customer-facing, include a virtual assistant in the Teams channel answering all those recurring queries?



Hoping to see many examples and ideas at the upcoming #MicrosoftBuild.

Don’t forget to register!



D365 CE + FO + Large transactional volumes

When integrating the Dynamics 365 CE (Customer Engagement) platform, which is based on Microsoft Dataverse, with D365 FO (Finance and Operations), we have a few options.

The method Microsoft suggests is to use dual-write if we require synchronous replication of data between the two applications. Dual-write has a lot of out-of-the-box capabilities to accomplish a functioning link between D365 CE and FO, but in my experience, it only works well where the use case does not involve a large number of transactions over a short time frame.

The integration templates include a lot of different tables, and there are many standard integrations out of the box for both D365 Sales and Field Service. But as soon as you wish to adjust or add to them, you will immediately hit various challenges.

You will learn that dual-write is still a very immature product that still has bugs you will need to understand and create workarounds for. You will realize that the GUI has limitations, and you’ll learn how to deal with those. You will discover that many of the D365 FO entities do not support dual-write because they cannot be accessed via the OData web API.

But when you finally overcome all these teething issues I believe dual-write is struggling with, you need to appreciate the following dos and don’ts that Microsoft has laid out.

Based on our previous enterprise definition, we would not see many use cases fitting the above list.

In particular, you would most often have multiple instances of D365 CE.

But even if you do fulfill the above list, you’d soon hit a wall in terms of large volumes.

Not because dual-write has issues dealing with large volumes, but because I would always try to avoid synchronous solutions for large volumes. I’d always try to design large transactional flows using a decoupled model. There are multiple scenarios where CRUD operations (create, read, update, delete) on data can fail. We need to take this into account, have ways to deal with these failures, and allow users to make guided decisions to rectify data and proceed.
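The decoupled pattern can be reduced to a queue with retries and a dead-letter store where failed records wait for a guided user decision. A minimal sketch, assuming a `create_record` stub in place of a real Dataverse call (the function name, record shape, and error type are all illustrative, not an actual SDK):

```python
import queue

MAX_RETRIES = 3

def create_record(record):
    """Stand-in for a Dataverse create call; raises on a data mismatch."""
    if record.get("bad_lookup"):
        raise ValueError("lookup field does not resolve")
    return {"id": record["name"]}

def drain(inbound, dead_letter):
    """Drain the inbound queue with retries; park failures for guided review."""
    while not inbound.empty():
        record = inbound.get()
        for attempt in range(1, MAX_RETRIES + 1):
            try:
                create_record(record)
                break  # success, move on to the next record
            except ValueError as err:
                if attempt == MAX_RETRIES:
                    # Decoupled: nothing is lost; a user can rectify and replay.
                    dead_letter.put({"record": record, "error": str(err)})
```

The point of the design is the last line: a failed transaction ends up somewhere inspectable instead of disappearing inside a synchronous sync.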

Analyzing and rectifying issues with data sync using dual-write is truly a technical, administrative task.

In addition, we need to consider the API service protection limits of Dynamics 365 CE. Dual-write does not consider other sources or logic changing the same data, and therefore it is a pain to have centralized control of all the CRUD operations.

So, still as of today, 29 April 2022, I would use technologies other than dual-write in any given enterprise scenario.

Dynamics 365 CE 99.9% Enterprise Support

As the title suggests, when dealing with D365 CE deployment projects, we expect it to fit organizational requirements to close to 99.9%.

No, I am not talking about the SLA from December 2021.

I’m trying to quantify the required features for admins, developers, and users that we would expect in enterprise scenarios. Maybe not exactly that amount, and to be fair, I haven’t done the calculations. But my point is that there are a few important things one needs to consider when running an enterprise-scale D365 CE project. Let’s start from the top.

What do we consider to be an enterprise? As an enterprise architect, I would say it is the governing realm of all business processes, people, solutions, and data we require to deliver a specified set of services or products to a given market. In essence, this is either a company, a group of companies, or a business unit within a company.

But for this article, the enterprise also emphasizes large volumes of transactions, traceability requirements, risk mitigation, scalability, application lifecycle support, etc. The list goes on, but all refer to the variables that occur within companies operating at extensive scale, combining many people across many systems and solutions.

For this article, I have focused on development and large amounts of transactions being sent to and from D365 Customer Engagement.

First, the requirement of dealing with large volumes of transactions.

It is not unusual for large organizations to deal with hundreds of thousands of transactions daily, ranging from monetary ones (e.g., sales orders) to IT-administrative ones such as audit logs.

Usually, you’d have a requirement to ensure 100% delivery of these transactions, regardless of whether you are receiving or sending. To accomplish this, you need to ensure transactions are always routed and created correctly. There are many ways of achieving just that. But in simple terms, we want to avoid failure scenarios causing data loss. For the D365 Customer Engagement platform, apart from apparent bugs in logic, these failure scenarios are most commonly caused by high-level data mismatches, for example, using incorrect parameters in lookup fields, causing records to fail to be created.
Just as commonly, you also hit design limits in your technology. The most common technology used for transacting data in and out of the Dynamics 365 Customer Engagement platform is the standard API provided by Microsoft. Its service protection limits are there to safeguard the Dynamics 365 CE platform from misuse that would ultimately render the entire service unusable.


The limits have changed over time, but currently a user is limited to 6,000 service calls to the API within a 5-minute sliding window. If we exceed this, the service responds with a 429 error.

Naturally, Microsoft offers many ways to deal with this, and most of them will eventually provide a solution. You can add more users, segment the calls into batches, throttle the rate of calls to the API, etc. But essentially, you need to be proactive! Microsoft will not do this for you; this is something you need to plan for when developing your solutions and logic.
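Being proactive mostly means honoring the throttling signal instead of hammering the API. When Dataverse returns 429, the response carries a Retry-After header saying how long to wait. A minimal retry wrapper, with `send` standing in for a real HTTP client such as requests (an assumption for this sketch):

```python
import time

def call_with_backoff(send, request, max_attempts=5):
    """Call an API via `send`, honoring Retry-After on 429 responses.

    `send` is any callable returning (status, headers, body).
    """
    for attempt in range(max_attempts):
        status, headers, body = send(request)
        if status != 429:
            return status, body
        # Pause for as long as the service asks; fall back to exponential
        # backoff if the header is missing.
        time.sleep(float(headers.get("Retry-After", 2 ** attempt)))
    raise RuntimeError(f"still throttled after {max_attempts} attempts")
```

Combined with batching and a client-side rate cap, this keeps a bulk load inside the service protection limits instead of fighting them.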

So is this enough? Are we there? Sorry – we aren’t.

On top of all our efforts, we also need to understand that we are not alone in providing our service as business consultants using the Microsoft platform. Essentially, we are partnering with Microsoft to deliver the service. Using cloud services means Microsoft effectively employs most of your application and infrastructure specialists. We also need to appreciate that cloud services run on hardware somewhere that needs to be correctly tuned to fit our requirements. The latter isn’t always plug and play!

I often see services such as Azure Logic Apps or the Dynamics 365 CE API stop responding: a logic app calls the API, and Dataverse responds with a 400 error.

In a recent project, we had these issues 1-2 times out of 10,000 records. That is enough to cause grave problems for an enterprise and must be dealt with. Contacting Microsoft support, you’ll learn that the only fix is to adjust the resources in their backend. So there is nothing you can control in advance, unless you take your chances and lower the rate of transactions per minute. But by doing so, you’d still not know about or control other areas of logic calling the same service. So my advice for dealing with these types of issues: always do stress testing, so you do not need to stress.
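A stress test does not need to be elaborate to surface this failure rate: fire a realistic volume of concurrent calls and count the errors. A sketch with a simulated endpoint (the failure probability mimics the 1-2 per 10,000 seen above; nothing here calls a real API):

```python
import random
from concurrent.futures import ThreadPoolExecutor

def flaky_service(_):
    """Stand-in for the real endpoint: fails rarely, like a backend that
    intermittently answers 400 under load (simulated, not a real call)."""
    return 400 if random.random() < 0.00015 else 200

def stress(calls=10_000, workers=16):
    """Fire many concurrent calls and report the observed failure rate."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(flaky_service, range(calls)))
    failures = sum(1 for status in results if status >= 400)
    return failures, failures / calls
```

Point the same harness at a test environment with production-like volumes, and you learn before go-live whether the backend needs tuning.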

On to the requirements gap for delivering functioning ALM with Dynamics 365 Customer Engagement.

I am a true believer in DevOps, and I use Azure DevOps extensively; sometimes it is the only place for all my project activities and documentation. In my previous blog article, I wrote about manual intervention. It is what I use in my releases via ADO pipelines to complete the set of activities required where D365 CE falls short of providing programmatic ways of changing or applying logic to the D365 CE environment. Below, I would like to address those activities that I wish had better ways to be manipulated using code or scripts.

Fiscal year: a required setting for most companies, but not possible to set via either the API or PowerShell. You have to go somewhere and press a few buttons.

App feature settings such as Export to PDF: you have to log on to the app and choose the entities used for PDF exports.

Turning off preview features such as the enhanced product experience must be done manually.

Almost all settings in the Power Platform admin center, such as turning on Dataverse search and audit log settings.

Registering webhooks via the Plug-in Registration Tool must be done manually.

So, unfortunately, as of today there are a few things hindering us from delivering a complete solution offering zero-touch ALM processes for Dynamics 365 Customer Engagement.

In DevOps we trust

In DevOps we trust – to not become fools in love!

Recently I was challenged as a solution architect to deliver a fairly large project with a fixed, hard date for go-live. The delivery was far from out of the box and required thousands of development hours. With that said, I knew there was very little margin for error in the various stage release processes. So, I decided to yet again revisit the possibility of delivering a full-fledged CI/CD configuration via Azure DevOps for Dynamics 365 Sales and Azure services. The spoiler is that I might have a crush, but I am not yet in love!

Now to why I still think we have some more ground to cover before I am smiling through the whole sentence proclaiming the ALM possibilities of Dynamics 365 Sales.

First off, my ambition was to avoid manual steps in the deployment process. This is, in my view, not obligatory in CI/CD but best practice. Adding manual steps to any process is similar to the famous “broken windows” theory, which holds that crime can start with overlooking the simplest degradation in society. First it is a broken window, which ultimately can lead to a chain of events allowing the culprit to nestle its way into a controlled environment and do harm. The same goes for many processes, and ALM is no different. Which leads me to my first problem with the current ALM possibilities of Dynamics 365 Sales.

It is not possible to automate all parts of Dynamics 365 Sales programmatically!

But all is not lost, of course. There is a feature within Azure DevOps Pipelines called manual intervention. It is by far not a new feature, and not something unique to Azure DevOps.

Please refer to Microsoft Docs here.


Basically, what it does is pause your current pipeline flow and let you continue the automation after you have manually deployed or made changes outside the automation within the pipeline. This is, in my view, ingenious and simple at the same time. The two usually go hand in hand.
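In YAML pipelines, this pause is expressed as a `ManualValidation` task inside an agentless job. A minimal sketch (the job name, notified user, and instructions are placeholders):

```yaml
jobs:
- job: wait_for_manual_steps
  pool: server          # agentless job; ManualValidation runs server-side
  steps:
  - task: ManualValidation@0
    timeoutInMinutes: 1440
    inputs:
      notifyUsers: 'release.manager@example.com'
      instructions: 'Apply the manual D365 CE settings, then resume the run.'
      onTimeout: 'reject'   # fail the run if nobody responds in time
```

The pipeline sits at this job until someone resumes or rejects it, which is precisely the controlled hand-off between automation and manual work described above.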

This merging of manual and automated steps is something oftentimes missing in process-oriented solution systems.

I would love to see this type of behavioral input possibilities in other systems without requiring development and customization.

So to summarize: Dynamics 365 Sales + Azure DevOps + manual interventions = almost complete ALM 😊

Magnus Oxenwaldt

Digital transformation and enterprise architecture enthusiast


Magnus has a diversified background, working in various business areas and in many different roles. He holds degrees in technology, economics, and marketing.

He currently works as an Executive Business Consultant with Columbus Global.

Originally from Sweden, he has worked globally; his current office location is Oslo, Norway.