Microsoft Windows Bulletin Board

Windows Server

Everything posted by Windows Server

1. Hi, Insiders! Whether you’re signing off from work for a day, a week, or an extended period of time, communication is paramount. You deserve to enjoy your time away or have the space to focus on whatever personal matter is at hand – a clear, concise, and practical out-of-office (OOO) message can make this possible. And you don’t have to craft it alone! Microsoft Copilot can help you write the perfect out-of-office message for Outlook, Teams, or other communication apps in no time at all.

For when you’re on vacation
When you take a much-needed break, you’ll not only want to alert people of your absence and inability to respond, but also point them toward whoever is available in your place to address their request.
Sample prompt: Hey Copilot, can you draft a funny but professional OOO message for a week-long vacation I’m taking starting March 3. I won’t be checking Outlook at all and want people to contact my boss, Megan Bowan, by email at megan.bowan@contoso.com only if it’s urgent. Otherwise, I will respond within 2-3 days after I return.
Copilot’s response

For a brief sick or personal day
If you’re only gone for a short while, you’ll still want people to know you may be slow to get back to them. This kind of OOO message also signals to coworkers and clients that you may need their grace and patience.
Sample prompt: Can you write me a brief, polite, and kind OOO message mentioning that I’m dealing with a personal matter and may be slow to respond to their email, and appreciate their patience?
Copilot’s response

For a last-minute emergency
Maybe your internet cut out or there are reports of an incoming snowstorm. Don’t fret! Whip up a quick away message and keep the work wheels turning while you tend to the issue.
Sample prompt: Copilot, draft an OOO response indicating that I may not respond right away because a local weather report is affecting my internet connection. Tell people I will be available by phone at (847) 555-3346 for anything urgent.
Copilot’s response

For an extended leave
Whether you’re taking time away from work to bond with your new baby or on a sabbatical, be sure to craft an OOO message that directs people who are trying to reach you toward colleagues who are ready to step in and help keep things moving while you are away.
Sample prompt: Write an OOO message for a 6-month parental leave. Make it friendly, and refer people with questions/requests to my coworker, Megan Bowan, at megan.bowan@contoso.com. Let people know that I’ll return to the office on April 25.
Copilot’s response

Tips and tricks
As you draft your OOO message with the help of Copilot, keep these tips in mind:
State the duration: Clearly mention the dates you will be unavailable, as well as when you will return.
Provide a reason (optional): You can briefly mention why you are out if you’re open to sharing and it feels appropriate for your audience, but it’s not mandatory. When in doubt, keep it short and free of details.
Offer an alternative contact: Provide the contact information of a colleague who can assist in your absence, if there is one. Make sure that the contact you include is aware that you’re doing so and is available to fill in for you before you take your leave.
Set expectations: Let the sender know when they can expect a response from you. This can be a specific date, such as when you’ll return to your desk, or a time frame in which you have flexibility to go through your emails diligently.
Be polite and professional: Maintain a courteous tone throughout your message. If appropriate, you can inject some humor or personality.

Learn about the Microsoft 365 Insider program and sign up for the Microsoft 365 Insider newsletter to get the latest information about Insider features in your inbox once a month! View the full article
2. Co-authors: Anbu Govindasamy, Jay Witt, Rosen Yanev, and Ross Sponholtz

Azure Chaos Studio is an Azure service that allows you to inject faults into your service to see how it responds to disruptions. In this blog, we discuss how to introduce Azure Chaos Studio for SAP use cases by introducing additional resource pressure or failure scenarios.

SAP Testing Requirements and Challenges
Qualifying the Azure environment: Customers often need to finalize the sizes of Azure VMs, storage, and network for specific SAP systems. Simulating production peak workloads to test and finalize the Azure environment and SKUs is an important step in this process.
Application test plans: Developing and executing end-to-end application testing for one-time migrations, additional country onboarding, or business transformation projects can be complex. While some customers have established test plans, others are still looking for ways to achieve this.
Time-consuming process: Testing is time-intensive and requires significant effort from various SAP teams. It involves preparing test cases, developing test data, and conducting the tests. Repeated testing for fine-tuning further increases the required man-hours.

SAP Traditional Testing Approaches
Custom test scripts and partner solutions: Customers often rely on custom-written test scripts or partner solutions to generate volume and perform stress tests. This helps in developing solutions and mitigation plans.
Oracle and HANA testing: For Oracle, Oracle RAT testing can be leveraged, and for HANA, HANA capture and replay can be used. However, both approaches require substantial investment in preparing the environment.
SAP customers are constantly looking for solutions to address peak workload situations and failure scenarios.

Introducing Azure Chaos Studio for SAP Use Cases
Azure Chaos Studio offers a comprehensive fault and action library, and we have selected a specific set of faults and actions to test with SAP, recognizing that Azure Chaos Studio has broader potential. We have divided our test scenarios into two main categories: stress testing and failure testing with HA/DR use cases. These tests aim to measure, understand, and enhance application resilience. Azure Chaos Studio complements SAP testing, but it is not meant to replace end-to-end SAP business process testing.

Stress Testing: In the following scenarios, we introduce additional pressure on CPU, network, and memory while performing SAP end-to-end load testing. We continue the test with different pressure points to learn from and address configuration- and resilience-related findings.
CPU Pressure
Network Packet Loss
Physical Memory Pressure
CPU Pressure example:
[Screenshot: Azure Chaos Studio configuration for setting up 50% CPU pressure]
[Screenshot: CPU spike after triggering the CPU pressure activity]

Failure Testing: For testing failure scenarios, we used the following faults with the SAP high-availability solution to validate proper functioning of the cluster and failovers:
Kill Process
Network Disconnect
"Kill Process" example:
[Screenshot: Azure Chaos Studio configuration for killing the HANA index process]
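To make the stress scenarios above more concrete, here is a minimal sketch of what a Chaos Studio experiment definition for the CPU pressure case might look like, expressed as a Python dictionary that could be submitted through an ARM template or az rest. The region, target VM resource ID, selector name, duration, and pressure level are illustrative placeholders, and the exact schema and capability URN should be confirmed against the Chaos Studio fault library documentation before use.

```python
import json

# Hypothetical resource IDs -- replace with your own onboarded Chaos target.
vm_target_id = (
    "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>"
    "/providers/Microsoft.Compute/virtualMachines/<SAP_APP_SERVER_VM>"
    "/providers/Microsoft.Chaos/targets/Microsoft-Agent"
)

experiment = {
    "location": "<REGION>",
    "identity": {"type": "SystemAssigned"},
    "properties": {
        # Selectors pick the VMs (onboarded as Chaos targets) the fault acts on.
        "selectors": [
            {"type": "List", "id": "SapAppServers", "targets": [
                {"type": "ChaosTarget", "id": vm_target_id}
            ]}
        ],
        # One step with one branch running a continuous agent-based CPU pressure fault.
        "steps": [
            {"name": "StressStep", "branches": [
                {"name": "CpuBranch", "actions": [
                    {
                        "type": "continuous",
                        "name": "urn:csci:microsoft:agent:cpuPressure/1.0",
                        "selectorId": "SapAppServers",
                        "duration": "PT30M",
                        "parameters": [{"key": "pressureLevel", "value": "50"}],
                    }
                ]}
            ]}
        ],
    },
}

print(json.dumps(experiment, indent=2))
```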
Next Steps
QuickStart guide: Leverage the QuickStart guide to get started with Azure Chaos Studio: https://learn.microsoft.com/en-us/azure/chaos-studio/chaos-studio-quickstart-azure-portal
Scenario building: With Azure Chaos Studio’s steps and branches, you can build complex, repeatable scenarios that can then be used on whole landscapes. Each step includes a fault to be tested, and branches describe which steps run in parallel and which run serially. Creating scenarios doesn’t require a lot of technical knowledge, and making changes to specific settings is easy given the intuitive interface.
Azure Chaos Studio region availability can be found here: Regional availability of Azure Chaos Studio | Microsoft Learn

Benefits of Azure Chaos Studio
In our testing, Azure Chaos Studio enabled us to explore test and resilience use cases that were not achievable with SAP application test suites alone. By introducing additional resource pressure, we were able to push the boundaries more effectively and uncover failure scenarios that were previously undetectable.

Summary
Azure Chaos Studio can be used to introduce additional resource pressure or simulate failure scenarios. We recommend that customers enhance their existing SAP test cases with Azure Chaos Studio techniques to add scenarios that are currently not possible, thereby improving resilience and failure handling.

Useful links: What is Azure Chaos Studio? | Microsoft Learn View the full article
3. Accidental deletion of critical Azure resources, such as Azure Database for MySQL flexible servers, can disrupt operations. To help avoid such accidental deletions, you can use a couple of options, including Azure Resource Locks and Azure Policy. This post explains how to implement these mechanisms, and how to revive a dropped MySQL flexible server by using the Azure CLI.

Note: You can set the default subscription for all Azure CLI commands mentioned in this article by using the following command: az account set --subscription <name or id>.

Preventing accidental deletions
You can help to prevent the accidental deletion of an Azure Database for MySQL flexible server by using Azure Resource Locks or Azure Policy.

Using Azure Resource Locks
To protect your Azure resources, you can use Resource Locks, which you can apply at both the resource and resource group levels. When you lock a resource group, you add an additional layer of protection by ensuring that all resources within the group are safeguarded against deletion.

Note: A resource group lock applies to all resources in the group, including virtual machines, storage accounts, and other services. In addition, new resources added to the resource group are automatically protected by the delete lock.

Protecting a MySQL flexible server
To lock a specific MySQL flexible server, run the following command:

az lock create \
  --name "PreventDeleteLock" \
  --resource-group <RESOURCE_GROUP_NAME> \
  --resource-name <MYSQL_SERVER_NAME> \
  --resource-type "Microsoft.DBforMySQL/flexibleServers" \
  --lock-type CanNotDelete

To verify locks on a MySQL flexible server, run the following command:

az lock list \
  --resource-group <RESOURCE_GROUP_NAME> \
  --resource-name <MYSQL_SERVER_NAME> \
  --resource-type "Microsoft.DBforMySQL/flexibleServers" -o table

To remove a lock on a MySQL flexible server, run the following command:

az lock delete \
  --name "PreventDeleteLock" \
  --resource-group <RESOURCE_GROUP_NAME> \
  --resource-name <MYSQL_SERVER_NAME> \
  --resource-type "Microsoft.DBforMySQL/flexibleServers"

Protecting a resource group containing a MySQL flexible server
To lock the entire resource group, run the following command:

az lock create \
  --name "PreventDeleteGroupLock" \
  --resource-group <RESOURCE_GROUP_NAME> \
  --lock-type CanNotDelete

To verify locks on a resource group, run the following command:

az lock list --resource-group <RESOURCE_GROUP_NAME> -o table

To remove a lock on a resource group, run the following command:

az lock delete \
  --name "PreventDeleteGroupLock" \
  --resource-group <RESOURCE_GROUP_NAME>

Note that if you attempt to delete a flexible server that has a CanNotDelete lock, the following error message appears:

The scope '/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampleRG/providers/Microsoft.DBforMySQL/flexibleServers/sampleMySQL' cannot perform delete operation because following scope(s) are locked: '/subscriptions/00000000-0000-0000-0000-000000000000/resourcegroups/sampleRG/providers/Microsoft.DBforMySQL/flexibleServers/sampleMySQL'. Please remove the lock and try again.

If you attempt to delete a resource group that has a CanNotDelete lock, the following error message appears:

The resource group sampleRG is locked and can't be deleted. Click here to manage locks for this resource group.

Best practices for using Resource Locks
When working with resource locks, be sure to keep the following best practices in mind.
Apply locks strategically: Lock critical resources individually and use group-level locks for comprehensive protection.
Use role-based access control (RBAC): Ensure only authorized personnel can remove or modify locks.
Automate lock management: Incorporate lock creation into deployment scripts or pipelines to enforce consistency.
Document locks: Maintain an updated inventory of the locks you've applied to prevent confusion among team members.

Applying delete locks to individual MySQL flexible servers and their resource groups helps ensure that critical Azure resources are not accidentally deleted. While locks provide protection, implement them carefully to avoid disruptions in resource management workflows.

Note: Creating or deleting a lock requires access to the Microsoft.Authorization/* or Microsoft.Authorization/locks/* actions. The Owner and User Access Administrator roles are the two built-in roles granted those actions. You can also implement delete locks by using the Azure portal, an ARM template, a PowerShell script, or the REST API. For more information, see the article Lock your resources to protect your infrastructure.

Using Azure Policy
Azure Policy provides a governance framework to enforce rules, including preventing the deletion of resources. Tags allow you to categorize Azure resources based on metadata, such as Environment, Project, or Owner. By combining tags with Azure Policy, you can enforce governance selectively, targeting only resources with specific tags. This section explains how to create and assign an Azure Policy that blocks deletion of MySQL flexible servers that carry specific tags.

Creating and assigning the policy
To create and assign the policy, use the following guidance.

1. Define a custom Azure policy for tagged resources
To block deletion for MySQL flexible servers that have specific tags, create a custom policy definition. The policy should check for a specific tag key-value pair (action:DONOTDELETE) and deny delete operations if the resource is a MySQL flexible server. To block the deletion of the parent resource group, set the cascadeBehaviors parameter to Deny. To define a custom policy for tagged resources, create a JSON file (mysql-policy.json) with the following content:

{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.DBforMySQL/flexibleServers" },
      { "field": "tags.action", "equals": "DONOTDELETE" }
    ]
  },
  "then": {
    "effect": "denyAction",
    "details": {
      "actionNames": [ "delete" ],
      "cascadeBehaviors": { "resourceGroup": "deny" }
    }
  }
}

2. Create the policy definition
To create the policy definition, at the Azure CLI, run the following command:

az policy definition create \
  --name "PreventDeletionOfTaggedMySQLFlexibleServers" \
  --description "Prevents deletion of MySQL Flexible Servers with specific tags" \
  --display-name "Prevent Deletion of Tagged MySQL Flexible Servers" \
  --rules "mysql-policy.json" \
  --mode Indexed

3. Assign the policy
Next, you need to assign the policy to a subscription or resource group and specify the target tag.
To assign the policy at the subscription level, run the following command:

az policy assignment create \
  --name "PreventDeletionOfTaggedMySQLFlexibleServersAssignment" \
  --policy "PreventDeletionOfTaggedMySQLFlexibleServers" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>"

To assign the policy at the resource group level, run the following command:

az policy assignment create \
  --name "PreventDeletionOfTaggedMySQLFlexibleServersAssignment" \
  --policy "PreventDeletionOfTaggedMySQLFlexibleServers" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>"

To exclude specific resources from the policy during policy assignment, run the command with the --not-scopes parameter:

az policy assignment create \
  --name "PreventDeletionOfTaggedMySQLFlexibleServersAssignment" \
  --policy "PreventDeletionOfTaggedMySQLFlexibleServers" \
  --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>" \
  --not-scopes "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>/providers/Microsoft.DBforMySQL/flexibleServers/<SERVER_NAME>"

4. Verify the policy
To verify the policy, run the following command:

az policy assignment list --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>"

To ensure that the policy is correctly assigned and working, attempt to delete a tagged MySQL flexible server. An error message similar to the following should appear:

Deletion of resource 'sampleMySQL' was disallowed by policy.

To check policy compliance via the Azure portal, navigate to Azure Portal > Policy > Compliance, and then review the compliance state for the assigned policy.

5. Delete the policy (if necessary)
If you need to delete the policy, run the following commands:

az policy assignment delete --name "PreventDeletionOfTaggedMySQLFlexibleServersAssignment" --scope "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP_NAME>"

az policy definition delete --name "PreventDeletionOfTaggedMySQLFlexibleServers"

Advantages of using tag-based policies
Using tag-based policies offers the following advantages:
Selective enforcement: Protect only resources with specific tags, minimizing unintended restrictions.
Flexibility: Easily manage resources by updating their tags.
Scalability: Apply policies across subscriptions or resource groups without affecting untagged resources.

Tag-based Azure policies provide a powerful mechanism to prevent accidental deletion of MySQL flexible servers. By leveraging tags, you can enforce governance in a targeted and scalable manner. This approach ensures that your critical resources remain protected while maintaining flexibility for your operations. For more information, see the Azure Policy documentation.

Recovering an accidentally deleted MySQL flexible server
An Azure Database for MySQL flexible server takes snapshot backups of data files and stores them in locally redundant storage. You can use these backups to restore a server to any point in time within your configured backup retention period. The default backup retention period is seven days; you can optionally configure it from 1 to 35 days.

Note: All backups are encrypted using AES 256-bit encryption for the data stored at rest. You can only access and restore the server backup from the Azure subscription in which the server initially resided.

To recover a deleted Azure Database for MySQL flexible server within the backup retention period, take the following recommended steps.
Important: You can only access and restore a deleted MySQL flexible server if the server backup has not been deleted from the system.

Before you restore a deleted Azure Database for MySQL flexible server instance, collect the following information:
The Azure subscription ID and resource group hosting the original server.
The location in which the original server was created.
The timestamp showing when the original server was dropped.

To get this information, query the Activity Log of the subscription by running the following command:

az monitor activity-log list \
  --subscription "<SUBSCRIPTION_ID>" \
  --start-time "<StartTimeStampInUTC>" \
  --end-time "<EndTimeStampInUTC>" \
  --query "[?operationName.value=='Microsoft.DBforMySQL/flexibleServers/delete'].{ResourceId : resourceId, DeleteTimeStamp : submissionTimestamp}" \
  --status "Succeeded" \
  --output table

Note: Set StartTimeStampInUTC and EndTimeStampInUTC to approximate values of when you might have dropped the server. Both values should be in ISO 8601 format.

After you have this information, trigger a Point-in-Time Restore by running the following command:

az rest --method put \
  --url "https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.DBforMySQL/flexibleServers/{OriginalServerName}?api-version=2024-06-01-preview" \
  --headers '{"Content-Type": "application/json"}' \
  --body '
  {
    "location": "<Dropped Server Location>",
    "properties": {
      "restorePointInTime": "<DeleteTimeStamp> - 15 minutes",
      "createMode": "PointInTimeRestore",
      "sourceServerResourceId": "/subscriptions/<SubscriptionId>/resourceGroups/<ResourceGroup>/providers/Microsoft.DBforMySQL/flexibleServers/<OriginalServerName>"
    }
  }'
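In the request body above, "<DeleteTimeStamp> - 15 minutes" is a placeholder: supply a concrete ISO 8601 timestamp roughly 15 minutes before the DeleteTimeStamp returned by the activity-log query. A minimal Python sketch of that calculation (the delete timestamp shown is a hypothetical value):

```python
from datetime import datetime, timedelta

# DeleteTimeStamp as returned by the activity-log query (hypothetical value).
delete_timestamp = "2024-12-03T22:14:29Z"

restore_point = datetime.strptime(delete_timestamp, "%Y-%m-%dT%H:%M:%SZ") - timedelta(minutes=15)

# Use this value for "restorePointInTime" in the request body.
print(restore_point.strftime("%Y-%m-%dT%H:%M:%SZ"))  # 2024-12-03T21:59:29Z
```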
If the restore request is submitted successfully, the following response appears:

{
  "operation": "RestoreSnapshotManagementOperation",
  "startTime": "2024-12-03T22:27:45.937Z"
}

Server creation time varies depending on the database size and the compute resources provisioned on the original server. To monitor the restore status, run the following command:

az monitor activity-log list \
  --subscription "<SUBSCRIPTION_ID>" \
  --resource-group <RESOURCE_GROUP_NAME> \
  --offset 1h \
  --query "[?operationName.value=='Microsoft.DBforMySQL/flexibleServers/write']"

Restoring a dropped virtual network enabled server involves specifying additional network properties, such as the delegated subnet resource ID and the private DNS zone Azure Resource Manager resource ID. Here is an example request to restore your server with the necessary network configuration:

az rest --method put \
  --url "https://management.azure.com/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampleRG/providers/Microsoft.DBforMySQL/flexibleServers/samplemysql?api-version=2024-06-01-preview" \
  --headers '{"Content-Type": "application/json"}' \
  --body '
  {
    "location": "Canada Central",
    "properties": {
      "restorePointInTime": "2024-12-03T21:59:29Z",
      "createMode": "PointInTimeRestore",
      "sourceServerResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampleRG/providers/Microsoft.DBforMySQL/flexibleServers/samplemysql",
      "network": {
        "delegatedSubnetResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampleRG/providers/Microsoft.Network/virtualNetworks/azure_mysql_vnet/subnets/azure_mysql_subnet",
        "privateDnsZoneResourceId": "/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/sampleRG/providers/Microsoft.Network/privateDnsZones/samplemysql.private.mysql.database.azure.com",
        "publicNetworkAccess": "Disabled"
      }
    }
  }'

Be sure to try an earlier timestamp for "restorePointInTime" if the following message appears in the Activity Log:

{
  "status": "Failed",
  "error": {
    "code": "ResourceOperationFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "RestoreSourceServerNotExist",
        "message": "Restore source server 'sampleMySQL' does not exist for requested point in time '12/3/2024 9:59:29 PM' of location 'canadacentral'."
      }
    ]
  }
}

Open a support ticket for further troubleshooting if the following message appears in the Activity Log:

{
  "status": "Failed",
  "error": {
    "code": "ResourceOperationFailure",
    "message": "The resource operation completed with terminal provisioning state 'Failed'.",
    "details": [
      {
        "code": "InternalServerError",
        "message": "An unexpected error occured while processing the request. Tracking ID: '48e52499-3857-4ea6-aaf1-75586a572601'"
      }
    ]
  }
}

Summary
You should now have all the information you need to prevent and recover from the accidental deletion of an Azure Database for MySQL flexible server. If you have any queries or suggestions, please let us know by leaving a comment below or by contacting us directly at AskAzureDBforMySQL@service.microsoft.com. Please provide any feedback related to the product in the Azure Database for MySQL Community. View the full article
4. Hello, I hope this is the right discussion group for this. Could someone please tell me whether the non-incentivized recognitions for CPOR Claims (both revenue association and usage association) mean that the recognition is reflected in the scoring for Solutions Partner Designation? If not, in what way is the association recognized? Thank you in advance. View the full article
  5. We are modernizing our Word VBA add-in to Office.js. We are looking for a way to grab a graphic image and toggle its visibility (which we can reach from the Selection pane). Can this functionality be added? View the full article
6. I have been extremely disappointed with Copilot after a year-plus of struggles. I want to create a company chat bot based on core documents: one for HR, one for company methods, etc. It seems Copilot Studio is the only way -- it is arcane, complex, and confusing -- I have "something" working but it is not at all effective. If someone has a streamlined approach to using Copilot Studio, I am all about it, but I struggled for hours to get a mediocre bot. The value of O365 is the collection of thousands of documents. Copilot seemingly has no access to them. Microsoft is adding small plus-up features to Excel and PowerPoint, but the power of AI is to process large quantities of data and help a human make sense of it. This should be Microsoft's competitive differentiator, but it is totally absent. Am I missing something? View the full article
7. The Future of AI blog series is an evolving collection of posts from the AI Futures team in collaboration with subject matter experts across Microsoft. In this series, we explore tools and technologies that will drive the next generation of AI. Explore more at: https://aka.ms/the-future-of-ai

In the previous post, we introduced Contoso Chat – an open-source RAG-based retail chat sample for Azure AI Foundry that serves as both an AI app template (for builders) and the basis for a hands-on workshop (for learners). And we briefly talked about five stages in the developer workflow (provision, setup, ideate, evaluate, deploy) that take developers from the initial prompt to a deployed product. But how can that sample help you build your app? The answer lies in developer tools and AI app templates that jumpstart productivity by giving you a fast start and a solid foundation to build on.

Imagine this familiar scenario. You are a traditional application developer in an enterprise and have been asked to build an AI-powered chat application that answers questions about your products. Where do you even start?
Do you know what architecture to use? (Think Retrieval Augmented Generation)
Do you know what “models” to use? (Chat model, Embeddings model)
Do you know what “services” you might need? (Safety, Search, Model Hosting)
Do you know how to “build” the application around them? (ideate-evaluate-deploy)
Can you make the development workflow repeatable across teams? (collaborative)
If this is not complex enough, consider the fast-growing ecosystem of models, frameworks, and tools that are coming up around AI. How can you flatten your learning curve?

Azure AI App Templates can help in three ways:
They implement infrastructure as code – with template files that can be version controlled and activated consistently across teams, with the Azure Developer CLI.
They use configuration as code – with dev container files for a Docker container with all dependencies pre-installed, that can be activated consistently across teams, in the cloud (with GitHub Codespaces) or locally (with Docker Desktop).
They provide a working app foundation with a defined application architecture.
Now, instead of having to figure out your design from scratch, you can start with a template that has the key requirements for your scenario – and customize it for your needs (with updated models, data, app source, and evaluation metrics).

Let’s revisit that scenario now. Want to build a custom retail chatbot grounded in your own data? Here’s how you can make that happen with Contoso Chat.

Discover it – with the AI App Template Gallery
Let’s start with the discovery process. How would you have found the right template for your needs if I hadn’t told you about it? You’d start with the AI app templates gallery, as shown below. Simply use the filters in the gallery to find the AI template that supports your use case. Let’s say you came in with the following criteria:
You have your customer data in an Azure CosmosDB
You have product indexes built with Azure AI Search
You want to use Azure OpenAI Service models for chat and embeddings
You want to build your application using the Azure AI Foundry SDK in Python
Fill those requirements in – and you will see that the recommended template is the Contoso Chat sample. Click on the tile to get more details, like the resources used, as shown below.

Develop with GitHub Codespaces
You have a template – what do you do now?
The first thing you want to do is to take the template for a spin and see if the features and experience match your needs. Activating that template requires you to use the Azure Developer CLI tool (more on that in a minute) and install additional dependencies (for example, the Azure AI Foundry SDK, individual Python SDKs for the services used, and VS Code extensions to boost productivity). Built-in dev container support in template repos makes this a one-click experience, as we’ll see in a minute. But you also have a choice – you can fork the existing sample to get a sandbox copy that you can periodically sync with the original for updates, or you can use “azd init” to create an instance of that template (at the current time) and use that as the basis for a new repo. We recommend the first approach for learners and the second for builders. The first approach allows you to track updates to the sample and learn about new features or tools. Contoso Chat has a prebuild-ready branch used with this workshop, as shown in the figure on the right. Want to jumpstart your learning journey? Use this prebuild link to launch the wizard below and set up your GitHub Codespaces environment in minutes, with one click.

Provision with Azure Developer CLI
Okay, so you found the right template for your needs, and you have your development environment running in GitHub Codespaces to start building. And all this took minutes. So, what do you need to do to provision, deploy, and explore the sample app? You need just one tool (azd) – and it’s already pre-installed in your GitHub Codespaces by default! The comic below gives you a visual guide to the Azure Developer CLI documentation, explaining what it does, how it works, why it matters, and how to use it with templates like Contoso Chat. Want to get a more structured understanding of the Azure Developer CLI workflow? Check out this free learning path that covers the same information with hands-on labs.

For now, we just want to deploy the template and explore the application. To do that, launch the GitHub Codespaces session as explained earlier, then wait until you see the Visual Studio Code environment become active in that browser tab. Going from template to deployment takes just two steps:
Authenticate with Azure (using “azd auth login --use-device-code”) to connect the development environment with an active Azure subscription.
Deploy the application with one command (“azd up”), which provisions the required resources, populates the required data, and deploys the application.

You will now have a RAG-based retail chat AI deployed to an Azure Container Apps hosted endpoint that you can test using the built-in Swagger (“/docs”) endpoint – or integrate with your external applications or clients for a better user experience. The deployment process will take a few minutes to complete, with minimal involvement needed from you or your IT admins at this stage! You can now visit the following “portals” to explore the deployment in more detail:
Visit the Azure Portal to understand the resource deployments associated with this architecture – specifically the Azure AI hub, project, and services resources that are typical for an Azure AI Foundry project. You can also explore the data samples used (product index in Azure AI Search, customer database in Azure CosmosDB) to get a sense for the schema and usage (e.g., vector search with semantic ranking).
Visit the Azure AI Foundry portal to manage your generative AI application needs in one place – from discovering and deploying new models, to activating content filters for safety, to viewing application traces or evaluation results when enabled. The Azure AI Foundry portal helps you monitor your application with enterprise-grade management features.

Recap and Next Steps
We started off this post by asking “how can an AI template help you build your app?” with a specific focus on improving developer productivity for jumpstarting new projects. And we saw how AI app templates solved three challenges for us:
Reuse vs. build from scratch – knowing the right AI architecture and components to use can be complicated. Start with a foundation template and customize it instead.
Configuration as code – get a consistent, reproducible development environment with a prebuilt dev container that can be activated in the cloud or on a local device.
Infrastructure as code – use AI app templates with the Azure Developer CLI to ensure a consistent and reproducible provisioning experience, with minimal developer effort.
Now you have a working app and development environment. Next, it’s time to customize it to your needs. And that means understanding how that application was designed and evolved from prompt to prototype. Join me next time to look at how we can ideate with Prompty!

Are you ready to start developing? Here are some resources that can help!
AI app templates gallery - Discover other AI solution templates to deconstruct.
Contoso Chat repository - Browse the README for a self-guided quickstart.
Azure AI Foundry - Discover AI models and services tailored to your use case. Explore the management center to manage resources, quotas, and more throughout the dev lifecycle. View the full article
8. We are modernizing our PowerPoint VBA add-in to Office.js. One of the functions of our add-in is to format charts into company branding. Since we cannot manipulate default Microsoft chart elements, we use text boxes for the chart title, subtitle, and axis labels. We group these text boxes within the chart itself. In Office.js, we have been able to identify text boxes, but we cannot select them or group them within a chart. Our VBA add-in can also show a submenu of functions providing a simplified interface to manipulate the series colors in the charts. Can this functionality be added? View the full article
9. We are modernizing our Excel VBA add-in to Office.js. One of the functions of our add-in is to format charts into company branding. Since we cannot manipulate default Microsoft chart elements, we use text boxes for the chart title, subtitle, axis labels, and footer elements (e.g., notes, source). We group these text boxes within the chart itself. In Office.js, we have been able to identify text boxes, but we cannot select them or group them within a chart. Can this functionality be added? View the full article
10. Implementing Rate Limiting for Azure OpenAI with Cosmos DB

Azure API Management (APIM) provides built-in rate limiting policies, but implementing sophisticated quota management for Azure OpenAI services requires a more tailored approach. This solution combines Azure Functions, Cosmos DB, and stored procedures to implement cost-based quota management with automatic renewal periods.

Architecture
Client → APIM (with rate limit config) → Azure Function Proxy → Azure OpenAI
                                                   ↓
                                        Cosmos DB (quota tracking)

Technical Implementation

1. Rate Limit Configuration in APIM
The rate limiting configuration is injected into the request body by APIM using a policy fragment. Here's an example for a basic $5 daily quota:

<set-variable name="rateLimitConfig" value="@{
    var productId = context.Product.Id;
    var config = new JObject();
    config["counterKey"] = productId;
    config["startDate"] = "2025-03-02T00:00:00Z";
    config["renewal_period"] = 86400;
    config["quota"] = 5;
    return config.ToString();
}" />
<include-fragment fragment-id="RateLimitConfig" />

For more advanced scenarios, you can customize token costs. Here's an example for a $10 quota with custom token pricing:

<set-variable name="rateLimitConfig" value="@{
    var productId = context.Product.Id;
    var config = new JObject();
    config["counterKey"] = productId;
    config["startDate"] = "2025-03-02T00:00:00Z";
    config["renewal_period"] = 86400;
    config["explicitEndDate"] = null;
    config["quota"] = 10;
    config["input_cost_per_token"] = 0.00003;
    config["output_cost_per_token"] = 0.00006;
    return config.ToString();
}" />
<include-fragment fragment-id="RateLimitConfig" />

Flexible Counter Keys
The counterKey parameter is highly flexible and can be set to any unique identifier that makes sense for your rate limiting strategy:
Product ID: Limit all users of a specific APIM product (e.g., "starter", "professional")
User ID: Apply individual limits per user
Subscription ID: Track usage at the subscription level
Custom combinations: Combine identifiers for granular control (e.g., "product_starter_user_12345")

Rate Limit Configuration Parameters

| Parameter | Description | Example Value | Required |
| --- | --- | --- | --- |
| counterKey | Unique identifier for tracking quota usage | "starter10" or "user_12345" | Yes |
| quota | Maximum cost allowed in the renewal period | 10 | Yes |
| startDate | When the quota period begins. If not provided, the system uses the time when the policy is first applied | "2025-03-02T00:00:00Z" | No |
| renewal_period | Seconds until quota resets (86400 = daily). If not provided, no automatic reset occurs | 86400 | No |
| endDate | Optional end date for the quota period | null or "2025-12-31T23:59:59Z" | No |
| input_cost_per_token | Custom cost per input token | 0.00003 | No |
| output_cost_per_token | Custom cost per output token | 0.00006 | No |

Scheduling and Time Windows
The time-based parameters work together to create flexible quota schedules:
If the current date falls outside the range defined by startDate and endDate, requests will be rejected with an error.
The renewal window begins either on the specified startDate or when the policy is first applied.
The renewal_period determines how frequently the accumulated cost resets to zero.
Without a renewal_period, the quota accumulates indefinitely until the endDate is reached.

2. Quota Checking and Cost Tracking
The Azure Function performs two key operations:
Pre-request quota check: Before processing each request, it verifies whether the user has exceeded their quota.
Post-request cost tracking: After a successful request, it calculates the cost and updates the accumulated usage.

Cost Calculation
For cost calculation, the system uses:
Custom pricing: If input_cost_per_token and output_cost_per_token are provided in the rate limit config.
LiteLLM pricing: If custom pricing is not specified, the system falls back to LiteLLM's model prices for accurate cost estimation based on the model being used.
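To make the cost-tracking logic concrete, here is a minimal Python sketch of the kind of calculation and quota check described above. It is illustrative only: the function names, the default per-token prices (taken from the example configuration), and the idea of reading token counts from the Azure OpenAI usage object are assumptions, not the repository's actual implementation.

```python
def calculate_request_cost(usage: dict,
                           input_cost_per_token: float = 0.00003,
                           output_cost_per_token: float = 0.00006) -> float:
    """Cost of one request, from the token usage returned by Azure OpenAI."""
    return (usage["prompt_tokens"] * input_cost_per_token
            + usage["completion_tokens"] * output_cost_per_token)


def is_quota_exceeded(accumulated_cost: float, quota: float) -> bool:
    """Pre-request check: reject (HTTP 429) once the accumulated cost reaches the quota."""
    return accumulated_cost >= quota


# Example: a completion that used 1,200 prompt tokens and 350 completion tokens.
usage = {"prompt_tokens": 1200, "completion_tokens": 350}
cost = calculate_request_cost(usage)          # 0.057
new_total = 4.95 + cost                       # added to the counter document in Cosmos DB
print(is_quota_exceeded(new_total, quota=5))  # True -> subsequent requests get HTTP 429
```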
The function returns appropriate HTTP status codes and headers:
HTTP 429 (Too Many Requests) when the quota is exceeded
Response headers with usage information:
x-counter-key: starter5
x-accumulated-cost: 5.000915
x-quota: 5

3. Cosmos DB for State Management
Cosmos DB maintains the quota state with documents that track:

{
  "id": "starter5",
  "counterKey": "starter5",
  "accumulatedCost": 5.000915,
  "startDate": "2025-03-02T00:00:00.000Z",
  "renewalPeriod": 86400,
  "renewalStart": 1741132800000,
  "endDate": null,
  "quota": 5
}

A stored procedure handles atomic updates to ensure accurate tracking, including:
Adding costs to the accumulated total
Automatically resetting costs when the renewal period is reached
Updating quota values when configuration changes

Benefits
Fine-grained cost control: Track actual API usage costs rather than just request counts
Flexible quotas: Set daily, weekly, or monthly quotas with automatic renewal
Transparent usage: Response headers provide real-time quota usage information
Product differentiation: Different APIM products can have different quota levels
Custom pricing: Override default token costs for special pricing tiers
Flexible tracking: Use any identifier as the counter key for versatile quota management
Time-based scheduling: Define active periods and automatic reset windows for quota management

Getting Started
Deploy the Azure Function with Cosmos DB integration
Configure APIM policies to include the rate limit configuration
Set up different product policies for various quota levels
For a detailed implementation, visit our GitHub repository.

Tags: #AzureOpenAI #APIM #CosmosDB #RateLimiting #Serverless View the full article
11. A large number of session hosts (1700+) is difficult to handle. The following suggestions would help:
Allow customizing the filters on the different columns: filter on power state, filter on health state, and filter on user (with wildcards).
Allow exporting to CSV with the filters applied. Today, the CSV export exports all the VMs regardless of the filter; it is only a workaround for the lack of filters on the different fields.
Thanks for reading View the full article
12. Microsoft partners like John Snow Labs and AI Software Solutions GmbH deliver transact-capable offers, which allow you to purchase directly from Azure Marketplace. Learn about these offers below:
John Snow Labs - Healthcare NLP: This package, curated for the healthcare sector, includes specialized natural language processing (NLP) Python libraries to accelerate text annotation processes. Python developers, data scientists, machine learning engineers, and research groups can use it to extract meaningful insights from unstructured documents, amplifying the efficiency of data interpretation in healthcare.
AI.S² Demand Forecast Solution: Get accurate sales predictions with this tool from AI Software Solutions GmbH, a paiqo GmbH company. AI.S² Demand Forecast Solution can import your organization's historical data and enrich it with external data (such as weather or economic data), then use AI algorithms to analyze it and generate forecasts. This will give you a better understanding of cause-and-effect issues and help you strengthen your value chain. View the full article
13. We are excited to announce support for Locust, a Python-based open-source performance testing framework, in Azure Load Testing. As a cloud-based and fully managed service for performance testing, Azure Load Testing helps you easily achieve high-scale loads and quickly identify performance bottlenecks. We now support two load testing frameworks – Apache JMeter and Locust. You can use your existing Locust scripts and seamlessly leverage all the capabilities of Azure Load Testing.

Locust is a developer-friendly framework that lets you write code to create load test scripts, as opposed to using GUI-based test creation. You can check the scripts into your repos, seek peer feedback, and better maintain the scripts as they evolve – just as you would for your product code. As for extensibility, whether it is sending metrics to a database, simulating realistic user behavior, or using custom load patterns, you can just write Python code and achieve your objective. You can also use Azure Load Testing to integrate Locust-based load tests into your CI/CD workflows. Very soon, you will be able to get started from Visual Studio Code (VS Code) and leverage the power of AI to get a Locust script generated. You can then run it at scale using Azure Load Testing and get the benefits of a managed service, all from within the VS Code experience.

User feedback in action
During the preview phase, many of you tried out Locust in Azure Load Testing and provided us invaluable feedback. We have put that into action and improved the offering to further enhance your experience.
We have ensured that getting a Locust script working with Azure Load Testing is frictionless, with zero to minimal modifications needed in your test scripts. This ensures that you can seamlessly run the same scripts in your local environment with lower load and on Azure Load Testing with high-scale load.
You can now install the dependencies required for your test script by specifying them in a ‘requirements.txt’ file and uploading it along with your test script.
If your test requires any supporting Python modules in addition to your test script, you can now upload multiple Python files and specify the main test script from which the execution should begin.
If you use a Locust configuration file to define load or any other configuration for your load test, you can just upload your .conf file along with your test script. The precedence order followed by Locust to override the values is honored.
Locust plugins are already available on the Azure Load Testing test engines. You can use them without having to separately upload or configure the plugins.
You have multiple options to integrate Locust load tests into your automation flows. You can use CI/CD integration, the Azure CLI, or REST APIs. Very soon, you’d also be able to use the Azure SDKs.

Using Locust scripts with Azure Load Testing
All the capabilities of Azure Load Testing that help you configure your tests, generate high-scale load, troubleshoot your tests, and analyze test results are supported for Locust-based tests. Let’s see this in action using a simple example of a user browsing multiple pages in a web application. You can create a Locust script for this scenario by writing a few lines of Python code, along the lines of the sketch below (the original post shows this as Figure 1: A sample Locust script).
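A minimal Locust script for this browsing scenario might look like the following; the host URL and page paths are hypothetical placeholders for your own application.

```python
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    """Simulates a user browsing a few pages of a web application."""
    host = "https://www.example.com"   # hypothetical application under test
    wait_time = between(1, 5)          # think time between tasks, in seconds

    @task(3)
    def view_home(self):
        self.client.get("/")

    @task(2)
    def view_products(self):
        self.client.get("/products")

    @task(1)
    def view_about(self):
        self.client.get("/about")
```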
Once you run the script in your local environment and ensure that it is working as expected, you can run the same script on Azure Load Testing. To create a Locust-based test in Azure Load Testing:
On the ‘Test plan’ tab, select ‘Locust’ as the load testing framework and upload your test script. You can also upload any supporting artifacts here. (Figure 2: Test framework selection)
On the ‘Load’ tab, configure the load that you want to generate. You can specify the overall number of users required and the spawn rate in the load configuration. Azure Load Testing automatically populates the number of test engines required; you can update the count if needed. You can also define the overall load required and the load pattern in your Locust script, or in a Locust configuration file. In that case, you can select the engine instances required to generate the target load. (Figure 3: Load configuration)
You also have options to parameterize your test script, monitor app components, and define test criteria.

Once you create the test and run it, you can see a rich test results dashboard that shows the performance metrics for the overall user journey as well as for specific pages. You can slice and dice the metrics to better understand performance and identify any anomalies. You can also correlate the client-side metrics with the server-side metrics from your app components to easily identify performance bottlenecks. (Figure 4: Results dashboard showing summary statistics. Figure 5: Results dashboard showing client-side metrics)

Get started
Get started with Locust on Azure Load Testing today and let us know how it enhanced your performance testing journey. Stay tuned for more exciting updates! You can learn more about using Locust with Azure Load Testing here. Have questions or feedback? Drop a comment below or share your feedback with us in the Azure Load Testing community! View the full article
14. Hello, this is about activating an eligible role using the ARM API. We created a custom role (only with the admin login action, no read action, because we do not want users to see the machines in the portal). We have a PowerShell script that is used inside the virtual machine to activate the eligible role using the ARM API; the role is assigned at the subscription level and activated at the resource level, using inheritance. It was working great, but for a couple of weeks we have been getting these errors:
"code":"GatewayAuthenticationFailed","message":"Gateway authentication failed for 'Microsoft.Authorization'"
AuthorizationFailed Message: The client '******@xxx.com' with object id 'xxxxa' does not have authorization to perform action 'Microsoft.Resources/subscriptions/resourcegroups/write' over scope '/subscriptions/xxxa/resourcegroups/ResGrp0213' or the scope is invalid. If access was recently granted, please refresh your credentials.
REST API used: PUT https://management.azure.com/{scope}/providers/Microsoft.Authorization/roleAssignmentScheduleRequests/{roleAssignmentScheduleRequestName}?api-version=2020-10-01
It's a random issue affecting random users. #Azure #AVD #AzureVirtualMachines. View the full article
15. Summary
The article provides guidance on using the .NET Profiler Trace feature in Microsoft Azure App Service to diagnose performance issues in ASP.NET applications. It explains how to configure and collect the trace by accessing the Azure Portal, navigating to the Azure App Service, and selecting the "Collect .NET Profiler Trace" feature. Users can choose between "Collect and Analyze Data" and "Collect Data only" and must select the instance to perform the trace on. The trace stops after 60 seconds but can be extended up to 15 minutes. After analysis, users can view the report online or download the trace file for local analysis, which includes information like slow requests and CPU stacks. The article also details how to analyze the trace using PerfView, a tool available on GitHub, to identify performance issues. Additionally, it provides a table outlining scenarios for using a .NET Profiler Trace or a memory dump based on factors like issue type and symptom code. This tool is particularly useful for diagnosing slow or hung ASP.NET applications and is available only in Standard or higher SKUs with the Always On setting enabled.

In this article:
How to configure and collect the .NET Profiler Trace
How to download the .NET Profiler Trace
How to analyze a .NET Profiler Trace
When to use .NET Profiler tracing vs. a memory dump

The tool is exceptionally suited for scenarios where an ASP.NET application is performing slower than expected or gets hung. As shown in Figure 1, this feature is available only in the Standard or higher Stock Keeping Unit (SKU) tiers with Always On enabled. If you try to configure a .NET Profiler Trace without both configurations, the following messages are rendered.

[Figure: Azure App Service Diagnose and solve problems blade in the Azure Portal error messages]
Error – This tool is supported only on Standard, Premium, and Isolated Stock Keeping Unit (SKU) only with AlwaysOn setting enabled to TRUE.
Error – We determined that the web app is not "Always-On" enabled and diagnostic does not work reliably with Auto Heal. Turn on the Always-On setting by going to the Application Settings for the web app and then run these tools.

How to configure and collect the .NET Profiler Trace
To configure a .NET Profiler Trace, access the Azure Portal and navigate to the Azure App Service which is experiencing a performance issue. Select Diagnose and solve problems and then the Diagnostic Tools tile.

[Figure: Azure App Service Diagnose and solve problems blade in the Azure Portal]

Select the "Collect .NET Profiler Trace" feature on the Diagnostic Tools blade and the following blade is rendered. Notice that you can only select Collect and Analyze Data or Collect Data only. Choose the one you prefer, but do consider having the feature perform the analysis. You can download the trace for offline analysis if necessary. Also notice that you need to select the instance on which you want to perform the trace. In this scenario, there is only one, so the selection is simple. However, if your app runs on multiple instances, either select them all or, if you have identified a specific instance which is behaving slowly, select only that one. You realize the best results if you can isolate a single instance enough so that the request you send is the only one received on that instance. However, in a scenario where the request or instance is not known, the trace still adds value and insights. Adding a thread report means that a list of all the threads in the process is also collected at the end of the profiler trace.
The thread report is especially useful if you are troubleshooting hung processes, deadlocks, or requests taking more than 60 seconds. This pauses your process for a few seconds until the thread dump is generated. CAUTION: a thread report is NOT recommended if you are experiencing high CPU in your application; you may experience issues during trace analysis if CPU consumption is high.

[Figure: Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace blade in the Azure Portal]

There are a few points called out in the previous image which are important to read and consider. Specifically, the .NET Profiler Trace will stop 60 seconds from the time that it is started. Therefore, if you can reproduce the issue, have the reproduction steps ready before you start the profiling. If you are not able to reproduce the issue, then you may need to run the trace a few times until the slowness or hang occurs. The collection time can be increased up to 15 minutes (900 seconds) by adding an application setting named IIS_PROFILING_TIMEOUT_IN_SECONDS with a value of up to 900. After selecting the instance to perform the trace on, press the Collect Profiler Trace button, wait for the profiler to start as seen here, then reproduce the issue or wait for it to occur.

[Figure: Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status starting window]

After the issue is reproduced, the .NET Profiler Trace continues to the next step of stopping, as seen here.

[Figure: Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status stopping window]

Once stopped, the process continues to the analysis phase if you selected the Collect and Analyze Data option, as seen in the following image; otherwise you are provided a link to download the file for analysis on your local machine. The analysis can take some time, so be patient.

[Figure: Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status analyzing window]

After the analysis is complete, you can either view the analysis online or download the trace file for local analysis.

How to download the .NET Profiler Trace
Once the analysis is complete, you can view the report by selecting the link in the Reports column, as seen here.

[Figure: Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace status complete window]

Clicking on the report, you see the following. There is some useful information in this report, like a list of slow requests, failed requests, thread call stacks, and CPU stacks. Also shown is a breakdown of where the time was spent during the response generation, into categories like Application Code, Platform, and Network. In this case, all the time is spent in the application code.

[Figure: Azure App Service Diagnose and solve problems, Collect .NET Profiler Trace review the Report]

To find out specifically where in the application code this request spends its time, perform the analysis of the trace locally.

How to analyze a .NET Profiler Trace
After downloading the trace by selecting the link in the Data column, you can use a tool named PerfView, which is downloadable on GitHub here. Begin by opening PerfView and double-clicking on the ".DIAGSESSION" file; after some moments, expand it to render the Event Trace Log (ETL) file, as shown here.

[Figure: Analyze Azure App Service .NET Profiler Trace with PerfView]

Double-click on the Thread Time (with startStop Activities) Stacks node, which opens a new window similar to the one shown next.
If your App Service is configured as out-of-process, select the dotnet process which is associated with your app code. If your App Service is in-process, select the w3wp process.

[Figure: Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process]

Double-click on dotnet and another window is rendered, as shown here. From the previous image (the .NET Profiler Trace report review), it is clear where the slowness is coming from; find that in the Name column or search for it by entering the page name into the Find text box.

[Figure: Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process, method and class discovery]

Once found, right-click on the row and select Drill Into from the pop-up menu, shown here. Select the Call Tree tab and the reason for the issue renders, showing which request was performing slowly.

[Figure: Analyze Azure App Service .NET Profiler Trace with PerfView, dotnet out-of-process, root cause]

This example is relatively simple. As you analyze more performance issues using PerfView to analyze a .NET Profiler Trace, your ability to find the root cause of more complicated performance issues improves.

When to use .NET Profiler tracing vs. a memory dump
That same issue could be seen in a memory dump; however, there are some scenarios where a .NET Profiler trace would be best. Here is a table, Table 1, which describes scenarios for when to capture a .NET Profiler trace or a memory dump.

| Issue Type | Symptom Code | Symptom | Stack | Startup Issue | Intermittent | Scenario |
| --- | --- | --- | --- | --- | --- | --- |
| Performance | 200 | Requests take 500 ms to 2.5 seconds, or take <= 60 seconds | ASP.NET/ASP.NET Core | No | No | Profiler |
| Performance | 200 | Requests take > 60 seconds & < 230 seconds | ASP.NET/ASP.NET Core | No | No | Dump |
| Performance | 502.3/500.121/503 | Requests take >= 120 to <= 230 seconds | ASP.NET | No | No | Dump, Profiler |
| Performance | 502.3/500.121/503 | Requests timing out >= 230 seconds | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump |
| Performance | 502.3/500.121/503 | App hangs or deadlocks (e.g., due to async anti-pattern) | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump |
| Performance | 502.3/500.121/503 | App hangs on startup (e.g., caused by nonasync deadlock issue) | ASP.NET/ASP.NET Core | No | Yes/No | Dump |
| Performance | 502.3/500.121 | Request timing out >= 230 seconds (time out) | ASP.NET/ASP.NET Core | No | No | Dump |
| Availability | 502.3/500.121/503 | High CPU causing app downtime | ASP.NET | No | No | Profiler, Dump |
| Availability | 502.3/500.121/503 | High memory causing app downtime | ASP.NET/ASP.NET Core | No | No | Dump |
| Availability | 500.0[121]/503 | SQLException or some exception causes app downtime | ASP.NET | No | No | Dump, Profiler |
| Availability | 500.0[121]/503 | App crashing due to fatal exception at native layer | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump |
| Availability | 500.0[121]/503 | App crashing due to exit code (e.g., 0xC0000374) | ASP.NET/ASP.NET Core | Yes/No | Yes/No | Dump |
| Availability | 500.0 | App throwing nonfatal exceptions (during the context of a request) | ASP.NET | No | No | Profiler, Dump |
| Availability | 500.0 | App throwing nonfatal exceptions (during the context of a request) | ASP.NET/ASP.NET Core | No | Yes/No | Dump |

Table 1, when to capture a .NET Profiler Trace or a memory dump on Azure App Service, Diagnose and solve problems.

Use this table as a guide to help decide how to approach solving the performance and availability problems which are occurring in your application source code. Here are some descriptions of the column headings.
- Issue Type – Performance means that a request to the app is responding or processing the response, but not at the speed at which it is expected to. Availability means that the request is failing or consuming more resources than expected.
You might find the videos in Table 2 useful; they show how to collect and analyze a memory dump or .NET Profiler trace.

Product | Stack | Hosting | Symptom | Capture | Analyze | Scenario
App Service | Windows | in-process | High CPU | link | link | Dump
App Service | Windows | in-process | High Memory | link | link | Dump
App Service | Windows | in-process | Terminate | link | link | Dump
App Service | Windows | in-process | Hang | link | link | Dump
App Service | Windows | out-of-process | High CPU | link | link | Dump
App Service | Windows | out-of-process | High Memory | link | link | Dump
App Service | Windows | out-of-process | Terminate | link | link | Dump
App Service | Windows | out-of-process | Hang | link | link | Dump
Function App | Windows | in-process | High CPU | link | link | Dump
Function App | Windows | in-process | High Memory | link | link | Dump
Function App | Windows | in-process | Terminate | link | link | Dump
Function App | Windows | in-process | Hang | link | link | Dump
Function App | Windows | out-of-process | High CPU | link | link | Dump
Function App | Windows | out-of-process | High Memory | link | link | Dump
Function App | Windows | out-of-process | Terminate | link | link | Dump
Function App | Windows | out-of-process | Hang | link | link | Dump
Azure WebJob | Windows | in-process | High CPU | link | link | Dump
App Service | Windows | in-process | High CPU | link | link | .NET Profiler
App Service | Windows | in-process | Hang | link | link | .NET Profiler
App Service | Windows | in-process | Exception | link | link | .NET Profiler
App Service | Windows | out-of-process | High CPU | link | link | .NET Profiler
App Service | Windows | out-of-process | Hang | link | link | .NET Profiler
App Service | Windows | out-of-process | Exception | link | link | .NET Profiler

Table 2, short video instructions on capturing and analyzing dumps and profiler traces

Here are a few other helpful videos for troubleshooting Azure App Service availability and performance issues:

- View Application EventLogs Azure App Service
- Add Application Insights To Azure App Service

Prior to capturing and analyzing memory dumps, consider viewing this short video: Setting up WinDbg to analyze Managed code memory dumps, and this blog post titled: Capture memory dumps on the Azure App Service platform.

Question & Answers

- Q: What are the prerequisites for using the .NET Profiler Trace feature in Azure App Service?
A: To use the .NET Profiler Trace feature in Azure App Service, the application must be running on a Standard or higher Stock Keeping Unit (SKU) with the Always On setting enabled. If these conditions are not met, the tool will not function, and error messages will be displayed indicating the need for these configurations.

- Q: How can you extend the default collection time for a .NET Profiler Trace beyond 60 seconds?
A: The default collection time for a .NET Profiler Trace is 60 seconds, but it can be extended up to 15 minutes (900 seconds) by adding an application setting named IIS_PROFILING_TIMEOUT_IN_SECONDS with a value of up to 900.
This allows for a longer duration to capture the necessary data for analysis.

- Q: When should you use a .NET Profiler Trace instead of a memory dump for diagnosing performance issues in an ASP.NET application?
A: A .NET Profiler Trace is recommended for diagnosing performance issues where requests take between 500 milliseconds and 2.5 seconds, or up to 60 seconds. It is also useful for identifying high CPU usage causing app downtime. In contrast, a memory dump is more suitable for scenarios where requests take longer than 60 seconds, the application hangs or deadlocks, or there are issues related to high memory usage or app crashes due to fatal exceptions.

Keywords

Microsoft Azure, Azure App Service, .NET Profiler Trace, ASP.NET performance, Azure debugging tools, .NET performance issues, Azure diagnostic tools, Collect .NET Profiler Trace, Analyze .NET Profiler Trace, Azure portal, Performance troubleshooting, ASP.NET application, Slow ASP.NET app, Azure Standard SKU, Always On setting, Memory dump vs profiler trace, PerfView analysis, Azure performance diagnostics, .NET application profiling, Diagnose ASP.NET slowness, Azure app performance, High CPU usage ASP.NET, Azure app diagnostics, .NET Profiler configuration, Azure app service performance

View the full article
  16. Hi Microsoft Team, I’d like to request the addition of key missing features in the PowerPoint JavaScript API that would greatly enhance its capabilities. Unlike Word and Excel, PowerPoint currently lacks methods to:
- Programmatically add hyperlinks to text and images.
- Insert and retrieve comments on elements.
- Add or modify slide notes to improve collaboration.
- Set alternative text for images to enhance accessibility.
These features are crucial for building advanced and accessible add-ins. They are already available in Word and Excel but are missing in PowerPoint, limiting automation and interactivity. Implementing these functionalities would bring PowerPoint to feature parity with other Office applications and unlock new possibilities for developers. I hope Microsoft considers these improvements. They would make a huge difference for those of us developing PowerPoint add-ins. Thanks for your time! Best, Mariangeles Codispoto View the full article
  17. We use basic public folder calendars for a number of things. These were originally hosted on on-prem Exchange, and are now on Exchange Online. I normally create them in Outlook, and select 'calendar items' from the list of types. However, I'm currently testing New Outlook, and that doesn't appear to have any ability to create public folders (you can only add existing ones to the favourites list). So I've looked in Exchange Online, and although that can create public folders, it seems to only create basic empty public folders, with no ability to select calendar items to make a calendar. Am I missing something obvious, or is it actually impossible to create public folder calendars when using a New Outlook / Exchange Online setup? Thanks View the full article
  18. Here's a quick run-down of the Cost Management updates for February 2025:
- Cost details datasets now include AccountId and InvoiceSectionId columns to support more cost allocation scenarios. Note: These columns are already available in FOCUS exports.
- Copilot is now one click away from the Cost Management overview with new sample prompts that can help you get started with Copilot for Azure.
- Learn about the FinOps Open Cost and Usage Specification with the Learning FOCUS blog series.
- New ways to save money with Microsoft Cloud:
  - Generally available: Changes to instance size flexibility ratios for Azure Reserved Virtual Machine Instances for M-series.
  - Generally available: Azure NetApp Files now supports minimum volume size of 50 GiB.
  - Public preview: Reduce costs with Hibernation in Azure DevTest Labs.
  - Public preview: Troubleshoot disk performance using Microsoft Copilot in Azure.
  - Public preview: Azure Monitor integrates performance diagnostics for enhanced VM troubleshooting.
  - Public preview: Introducing the new AKS Monitoring Experience—Unified Insights at your fingertips.
- Documentation updates for Cost Management API modernization, programmatically creating MCA subscriptions, and more.
This is just a quick summary. For the full details, please see Microsoft Cost Management updates—February 2025. View the full article
  19. Does anyone know how I can prevent random old email accounts from popping up when I want to share something from my desktop? View the full article
  20. Blue screens: does anyone know what these dumps are about? Maybe it's a driver...? My laptop has cooling problems because the copper tube is damaged; however, when I play something as simple as Minesweeper, Windows throws blue screens when closing a program. View the full article
  21. Users are now able to reduce the main Teams window, the chat window, and the meeting stage to 360px or 502px wide, compared to the current minimum supported width of 720px, with no loss of functionality. #Teams #MicrosoftTeams #Productivity #MPVbuzz #Microsoft365 View the full article
  22. 📢 Microsoft Lists Forms New Features 🌟 Experience a new way to create forms directly from Lists home, SharePoint, and the Lists app in Microsoft Teams. This streamlined process automatically generates the underlying list for responses, saving you time and effort. With conditional branching, you can show or hide questions based on previous answers, ensuring respondents only see relevant questions. You can branch questions to other questions or based on choices in a Choice field. Enhance your forms with a relevant logo to make them look more professional and reinforce your brand identity. Support for additional field types includes Attachments, Image, Location, and Lookup. Note that for Lookup fields, respondents need at least read access to the underlying source list to see and select options. Stay informed with notifications for new responses and schedule specific start and end dates for your forms. #Lists #MicrosoftLists #Productivity #MPVbuzz #Microsoft365 #Forms #ListsForms #MicrosoftForms View the full article
  23. Why is the process for deleting an email in Outlook like this: delete from "Inbox", then delete from "Deleted Items", and then still need to delete from "Recover Items"? View the full article
  24. Hello, I tried to turn my PC on and it just kept turning on and off without showing any display; this is an old PC that's been working fine. I tried to access the BIOS, but it wouldn't let me. I saw online that taking the RAM out and putting it back in solves the issue. I tried that, and the machine stayed on, but the display was still not showing up. Somehow it reverted back to the boot loop. As I was looking for the CMOS battery (which I still cannot find), I saw that there is a red light on my motherboard; the light is for VGA. Is my PC toast? View the full article
  25. My Windows Clock freezes often and I fix it by restarting File Explorer from Task Manager; the problem is that it is happening 5 times per week. Is there a way to solve the problem forever? View the full article