Distributing applications, and keeping them updated, across several Intune tenants, customers and management types is a demanding exercise. A standardized process is key to maintaining control and to enabling a "one-to-many" delivery method. Intility has used the PowerShell App Deployment Toolkit (PSADT) for several years to standardize the application packaging process. The framework is well known for its functions, logging and decent UI. After 7+ years of feature requests and continuous improvement, our internal PSADT template has been extended with a modern GUI and additional functionality tailored specifically to our needs.
However, the process of packaging and distributing an application is still heavily reliant on a technician who collects the necessary installation files, wraps the PSADT toolkit around the .exe/.msi file and adds the necessary code, before deploying the application to the correct management system(s). Is it possible to build a solution that handles all of this at the press of a button that says "Go"?
Before we deep-dive into AppPackBot territory, let's start with a short introduction to application packaging: a discipline that specializes in deploying applications to end users. We see it as the art of getting software from the vendor to the work surface as seamlessly as possible, at scale. Intility delivers client management for both Windows and Mac clients, and leverages Software Center and Company Portal as dedicated application hubs.
The process of publishing an application consists of several steps:
Once packaged and deployed, the end user can install the application:
AppPackBot is an application packaging robot that can also be seen as a virtual colleague. This is the overall design:
There are two servers. One manages the internal platforms (hereafter called "Internal"), where the parts that are shared between customers are managed (such as shared applications). The other manages the customer-specific platforms and systems, such as Intune tenants and AD domains (hereafter called "External"). This split reflects our security model, where the principle of least privilege is key.
All integrations to the systems the bot manages are PowerShell-based, so it's only logical to build the framework of the bot in PowerShell as well.
The bot can run packaging jobs in parallel and rerun specific steps of a job, for example when a step needs to be retried or skipped. Every job has a unique GUID.
An API running on both servers was considered, but that would require the user to invoke the right server for any given step; otherwise the servers would need to talk to each other and negotiate who should do what. We figured this would create even bigger complexity in the architecture and data flows.
Metro was chosen for the data flow. Metro is a combination of an API gateway, Azure Event Grid and Azure Service Bus that Intility hosts in Azure for internal event-driven systems. Metro handles communication between the servers, and between the UI and the servers. The listener for Metro is a simple worker application written in C# on .NET 6. This is the only part that is not PowerShell, simply because we already had a great template for a .NET Metro listener.
The data field of the Metro message is an instance of one big class that contains all the data that might be relevant for any job, such as the name, metadata or application-specific instructions required by the bot. This approach gives us flexibility with little complexity. The Metro message contains metadata such as the EventType (e.g., "ContentCreated"), the Subject (a few key properties, a tl;dr of sorts), and the JSON representation of the data.
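To make this concrete, here's a minimal sketch of what the job class and the message envelope might look like. The class and property names are our illustration, not AppPackBot's actual schema:

```powershell
# Illustrative sketch only - the real AppPackBot class is much larger.
class PackagingJob {
    [guid]      $JobId          # unique per job
    [string]    $PackageName    # e.g. 'Microsoft Edge'
    [string]    $Version
    [string]    $ContentPath    # where the package content lives
    [hashtable] $Instructions   # application-specific instructions for the bot
}

$job = [PackagingJob]@{
    JobId       = [guid]::NewGuid()
    PackageName = 'Microsoft Edge'
    Version     = '1.2.3'       # placeholder version
}

# The Metro message wraps the job as JSON next to its routing metadata.
$message = @{
    EventType = 'ContentCreated'
    Subject   = "Content created for $($job.PackageName) $($job.Version)"
    Data      = $job | ConvertTo-Json -Depth 5
}
```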
When the MetroListener receives a message, it generates a PowerShell command that imports the AppPackBot module and runs Receive-MetroEvent with the previously mentioned JSON, and saves that command as a script with a random name. The script is then run asynchronously with pwsh.exe. This ensures that the MetroListener only does the bare minimum: receive the message and run PowerShell with the JSON as the only input.
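The generated wrapper is tiny. A sketch of what it might contain, and how it is launched; the file name, the parameter name and the JSON body here are placeholders:

```powershell
# Contents of the generated script, e.g. C:\AppPackBot\Jobs\f3a9c2.ps1 (random name).
# Receive-MetroEvent is the module's entry point; -EventJson is an assumed parameter name.
Import-Module AppPackBot
Receive-MetroEvent -EventJson '{"EventType":"ContentCreated","Subject":"...","Data":{}}'

# The MetroListener then launches the script fire-and-forget, roughly:
# pwsh.exe -NoProfile -File C:\AppPackBot\Jobs\f3a9c2.ps1
```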
WinGet is integrated as a source for versions and links to installation files. When we find a package in our repo that matches a package in the winget-pkgs repo, we tag our package with the app's WinGet Package Identifier. AppPackBot has a scheduled task that compares our packages with the ones in WinGet. If a tagged package of ours is out of date according to the WinGet repo, AppPackBot creates an order for itself in our internal portal, Workplace Portal. Sometimes we notice that an app is outdated before WinGet knows. In those cases, we contribute the new version back to WinGet, so that the scheduled task can pick up the update the next time it runs. Homebrew is integrated in a similar manner for our Mac packages.
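A simplified sketch of that comparison, using the winget CLI. The internal version lookup is a placeholder; Microsoft.Edge is a real Package Identifier:

```powershell
# Our package's current version and its WinGet tag (placeholders for repo metadata).
$ourVersion = [version]'1.2.3'
$wingetId   = 'Microsoft.Edge'

# Ask winget for the latest version it knows about.
$versionLine = winget show --id $wingetId --exact |
    Select-String -Pattern 'Version:' | Select-Object -First 1
$latest = ($versionLine.Line -split ':', 2)[1].Trim()

if ([version]$latest -gt $ourVersion) {
    # A real run would create an order in Workplace Portal at this point.
    Write-Host "$wingetId is outdated: $ourVersion -> $latest"
}
```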
To give you a better understanding of how all of this works, let's take a look at what happens when we update Microsoft Edge for Windows. We’re using this example as it touches almost every aspect of AppPackBot. You’ll likely pick up a lot of our packaging standards and security considerations along the way.
An order for updating Microsoft Edge is posted in Workplace Portal, either by a technician or the aforementioned scheduled task.
The details of this request can't necessarily be trusted, as they can contain errors due to bugs in the automation, or human error. The automated creation of a request uses the previous request as a template, so errors are rare in those cases. A specialized application packaging technician must review the request, make modifications if necessary, and then click "Send to bot". All steps below happen on the Internal server, unless stated otherwise. Let's look at what happens:
The first step is to validate the request data and gather metadata based on the request. In this case, an update, we need the previous package as a source of information. Thanks to a standardized folder and naming structure, the bot can find it by itself. If the previous version is not found, or anything else is wrong, the bot logs it for the technician and exits the script, so the job can be retried later with correct data.
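Because of the standardized structure, locating the previous package can be as simple as a sorted directory listing. A minimal sketch, assuming a hypothetical layout of \\server\Packages\&lt;Vendor&gt;\&lt;App&gt;\&lt;Version&gt;:

```powershell
# Hypothetical repository path following the standardized structure.
$packageRoot = '\\server\Packages\Microsoft\Edge'

# The highest version folder is the previous package we use as the source.
$previous = Get-ChildItem -Path $packageRoot -Directory |
    Sort-Object { [version]$_.Name } -Descending |
    Select-Object -First 1

if (-not $previous) {
    # Log for the technician and stop; the job can be retried with correct data.
    Write-Error "No previous package found under $packageRoot"
    return
}
```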
The next step is to create the package content.
Now we need to do a deep analysis of the content. This means:
<info>Most of this is for our internal package insight and learning, but here we can also determine if the package can be tested using a clean Windows 11 VM that is isolated from customer networks and not joined to any domain or Intune tenant. Scripts containing server names might not work. This content for Microsoft Edge does not contain anything like that.</info>
The second step is to grab an available VM from a pool of test VMs, turn it on and wait for it to get an IP address and be automatically logged on with a local user without local admin rights. The bot then copies the content into the VM, along with a generated cmd script that:
Using psexec, the bot runs the generated cmd asynchronously, as either the logged-on user or the system account, depending on the content. In this Edge case (ha ha), it runs as system.
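The psexec invocation itself is plain. A hedged example of the system variant used here; the VM name and paths are placeholders:

```powershell
# Run the generated cmd on the test VM as SYSTEM (-s), without waiting (-d).
psexec.exe \\TESTVM01 -accepteula -s -d cmd.exe /c C:\Temp\AppPackBot\run-test.cmd

# For content that must run as the logged-on user, the bot would instead target
# the interactive session with that user's credentials, along the lines of:
# psexec.exe \\TESTVM01 -accepteula -u TESTVM01\testuser -p <password> -i 1 -d cmd.exe /c C:\Temp\AppPackBot\run-test.cmd
```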
Now the VM is off to do its thing while the bot waits. PSADT writes log files, and some summaries as JSON files, to the log location on the VM. Every ten seconds the bot robocopies all the logs out and reads how things are going. This local copy on the server also enables the logs to reach our logging infrastructure (Splunk) on behalf of the VM.
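A condensed sketch of that polling loop; the paths, the summary file name and its fields are assumptions:

```powershell
$jobId     = 'f3a9c2'                                # placeholder job reference
$vmLogs    = '\\TESTVM01\c$\Windows\Logs\Software'   # PSADT's default log folder
$localLogs = "C:\AppPackBot\Jobs\$jobId\Logs"
$start     = Get-Date

do {
    Start-Sleep -Seconds 10

    # Pull new and changed logs out of the VM (quiet flags trim the output).
    robocopy $vmLogs $localLogs /E /NJH /NJS /NP | Out-Null

    # Read the summary PSADT has written so far, if any.
    $summaryFile = Join-Path $localLogs 'summary.json'
    $summary = if (Test-Path $summaryFile) {
        Get-Content $summaryFile -Raw | ConvertFrom-Json
    }
} until ($summary.Finished -or ((Get-Date) - $start).TotalMinutes -ge 30)
```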
If the installation succeeds, the app is detected, the uninstallation succeeds and the app is no longer detected, the whole test is considered a success. In any other case, or if the whole test takes longer than a predetermined timeout (typically 30 minutes), the test is considered a failure. In case of failure, a technician needs to fix the error and either finish the rest themselves or let the bot retry its test. Edge usually tests successfully, and in that case the bot broadcasts that the test was successful.
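The verdict boils down to four booleans. A sketch with assumed field names and sample values:

```powershell
# Assumed shape of the test summary; the field names are our illustration.
$summary = [pscustomobject]@{
    InstallExitCode        = 0
    DetectedAfterInstall   = $true
    UninstallExitCode      = 0
    DetectedAfterUninstall = $false
}

$testSucceeded = ($summary.InstallExitCode -eq 0) -and
                 $summary.DetectedAfterInstall -and
                 ($summary.UninstallExitCode -eq 0) -and
                 (-not $summary.DetectedAfterUninstall)

# On success the bot broadcasts a "test successful" Metro message;
# on failure (or timeout) the job is flagged for a technician.
```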
Now that the content is created and confirmed working, it's time to get it out into the management systems. These steps run in parallel, so keep that in mind when reading steps a, b and c. (Everything so far has happened in sequence.)
When the Internal server gets the message about a successful test and Software Center is a target, it simply finds the app by name in Configuration Manager and edits four things: the version number, the content location, the detection method script and the date published. It then distributes the new content to the relevant Distribution Points.
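Those four edits map to a handful of cmdlets from the ConfigurationManager module. A hedged sketch with placeholder names and paths, assuming the site drive is already mounted:

```powershell
$appName    = 'Microsoft Edge'                       # placeholder application name
$newVersion = '1.2.3'
$content    = "\\server\Packages\Microsoft\Edge\$newVersion"

# Version number and date published live on the application itself.
Set-CMApplication -Name $appName -SoftwareVersion $newVersion -ReleaseDate (Get-Date)

# Content location and detection method script live on the deployment type.
Set-CMScriptDeploymentType -ApplicationName $appName `
    -DeploymentTypeName "$appName - PSADT" `
    -ContentLocation $content `
    -ScriptText (Get-Content "$content\Detect.ps1" -Raw) `
    -ScriptLanguage PowerShell

# Finally, push the new content to the relevant distribution points.
Update-CMDistributionPoint -ApplicationName $appName -DeploymentTypeName "$appName - PSADT"
```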
Our deployment tool of choice here only requires us to copy the content onto another file server. So, when the Internal server gets the message about a successful test and Citrix is a target, it broadcasts a message instructing an update for Citrix. This detour spins up a parallel process so that it doesn't interfere with the others. When the Internal server receives that message, it updates the package for Citrix.
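The Citrix update itself is then just a mirror copy onto the Citrix file server; a sketch with placeholder paths:

```powershell
# Mirror the tested package content onto the Citrix file server (placeholder paths).
robocopy "\\server\Packages\Microsoft\Edge\1.2.3" "\\citrixfs\Apps\Microsoft\Edge" /MIR /NJH /NJS
```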
The job is now done! All these steps were for a single application; multiple jobs can run in parallel.
Below is a step-by-step visualization of the entire process, where the different instances are illuminated as you progress through each step.
<info>Through the same framework, AppPackBot can also update applications for macOS through our Jamf infrastructure. The process is similar in many ways and will not be covered in detail in this article.</info>
Having successfully created a flow that processes information between the different instances, the last piece of the puzzle was an interface that makes it possible to interact with the bot in a secure and intuitive way. The visual interface shown below is our first design attempt; it is in constant change as new features are continuously added. The interface is hosted in Workplace Portal and provides valuable insight, including:
The adoption of AppPackBot is still in its early stages and there are plenty of features still to come. However, we can already see clear trends that this virtual colleague can execute large parts of an already standardized and automated process. Since it was introduced internally in April 2022, we estimate that it has reduced time spent on application packaging by over 850 hours. We're excited to see what this adds up to in a full year, with more automation still to come.