Microsoft Windows Bulletin Board

Windows Server


Everything posted by Windows Server

  1. Good morning, Copilot community! Referencing the Microsoft Answers post "Why is Copilot Pro not allowing files larger than 1 mb to be uploaded": is there an update on whether or when the 1 MB file attachment limitation will be lifted for organizational Microsoft 365 licensed users of Copilot? Thank you View the full article
  2. When copying a task in Microsoft Planner, why does the copy-task confirmation no longer include a link to the newly created task? Having that link allowed users to easily open the new task to make edits, which was extremely helpful when a templated task is copied to create new tasks. View the full article
  3. Hello again all, Chris Cartwright here from the Directory Services support team. Recently, we released the plan to remove DES as an encryption type for Kerberos completely, along with identification scripts to assist with this at microsoft/Kerberos-Crypto: Tools and information regarding Windows Kerberos cryptography. I wanted to provide a brief update to the XML filtering that was illustrated in the previous blog post, "So, you think you're ready for enforcing AES for Kerberos?", which I will reference quite a bit. While I don't expect readers of this blog to be using DES, I still wanted to make sure that the information was out there. Additionally, there was another change to auditing events that will be covered in another blog post; the XML here is also modified to support that.

XML Filters

Here are the XML filters you can leverage to find specific events.

Hunting down DES tickets issued:

<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='TicketEncryptionType']='0x1']]</Select>
  </Query>
  <Query Id="1" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='TicketEncryptionType']='0x2']]</Select>
  </Query>
  <Query Id="2" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='TicketEncryptionType']='0x3']]</Select>
  </Query>
</QueryList>

Hunting down only legacy keys available (there will be more information on this in a later blog post):

<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='AccountAvailableKeys']='RC4, DES']]</Select>
  </Query>
  <Query Id="1" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='ServiceAvailableKeys']='RC4, DES']]</Select>
  </Query>
  <Query Id="3" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='DCAvailableKeys']='RC4, DES']]</Select>
  </Query>
  <Query Id="4" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='AccountAvailableKeys']='RC4']]</Select>
  </Query>
  <Query Id="5" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='ServiceAvailableKeys']='RC4']]</Select>
  </Query>
  <Query Id="6" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='DCAvailableKeys']='RC4']]</Select>
  </Query>
</QueryList>

Hunting down RC4 tickets issued:

<QueryList>
  <Query Id="0" Path="Security">
    <Select Path="Security">*[EventData[Data[@Name='TicketEncryptionType']='0x17']]</Select>
  </Query>
</QueryList>

Custom Event Forwarder Targets

If you choose to, you can leverage this XML file (or create your own) for the Event Forwarding described in the previous blog and get dedicated targets.

Manifest text:

<?xml version="1.0"?>
<instrumentationManifest xsi:schemaLocation="http://schemas.microsoft.com/win/2004/08/events eventman.xsd"
    xmlns="http://schemas.microsoft.com/win/2004/08/events"
    xmlns:win="http://manifests.microsoft.com/win/2004/08/windows/events"
    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xmlns:xs="http://www.w3.org/2001/XMLSchema"
    xmlns:trace="http://schemas.microsoft.com/win/2004/08/events/trace">
  <instrumentation>
    <events>
      <provider name="WEC-Legacy Hunter" guid="{8D8635E8-3573-49B6-A5CE-A91601E1B5D9}" symbol="EvtFwdLegHunt"
          resourceFileName="C:\Windows\system32\Legacy-Hunter-WEC.dll"
          messageFileName="C:\Windows\system32\Legacy-Hunter-WEC.dll">
        <channels>
          <channel name="RC4 Keys Only" chid="RC4 Keys Only" symbol="RC4KeysOnly" type="Operational" enabled="true" message="$(string.WEC-Legacy-Hunter.channel.RC4KeysOnly.message)"></channel>
          <channel name="RC4 Used" chid="RC4 Used" symbol="RC4Used" type="Operational" enabled="true" message="$(string.WEC-Legacy-Hunter.channel.RC4Used.message)"></channel>
          <channel name="DES Used" chid="DES Used" symbol="DESUsed" type="Operational" enabled="true" message="$(string.WEC-Legacy-Hunter.channel.DESUsed.message)"></channel>
        </channels>
      </provider>
    </events>
  </instrumentation>
  <localization>
    <resources culture="en-US">
      <stringTable>
        <string id="WEC-Legacy-Hunter.channel.RC4Used.message" value="RC4 Ticket issued"></string>
        <string id="WEC-Legacy-Hunter.channel.RC4KeysOnly.message" value="RC4 Keys Only"></string>
        <string id="WEC-Legacy-Hunter.channel.DESUsed.message" value="DES Ticket issued"></string>
      </stringTable>
    </resources>
  </localization>
</instrumentationManifest>

Visual Studio

Previous steps for configuring Visual Studio are in the previous blog post referred to earlier. To build the WEC-Legacy-Hunter resource DLL for the event logs shown above:

1. Create a new Windows Desktop Wizard project.
2. Click Create, and choose Dynamic Link Library as the Application type. Make sure Empty Project is checked.
3. Right-click the project on the right side and choose Add Existing Item, then select the .rc and .h files. You should see the files showing in the project as shown below:
4. On the top menu bar, select Project -> Properties, and set /NOENTRY under Linker\Advanced.
5. On the top menu bar, click Build -> Build Solution.

In your project folder, there will be a DLL file under .\x64\Debug. You can leverage the steps from the previous blog to install the manifest and point each subscription to the intended destination event log. See the previous blog for more details on configuring Event Forwarding. Once again, good hunting! View the full article
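A side note not in the original post: the TicketEncryptionType values used in the XML filters above are the standard Kerberos encryption-type codes (DES variants 0x1–0x3, RC4 0x17, AES 0x11/0x12). A small Python sketch for labeling values pulled from exported events might look like:

```python
# Kerberos encryption type codes as they appear in the
# TicketEncryptionType field of Windows security events (hex values).
ETYPES = {
    0x1: "DES-CBC-CRC",
    0x2: "DES-CBC-MD4",
    0x3: "DES-CBC-MD5",
    0x11: "AES128-CTS-HMAC-SHA1-96",
    0x12: "AES256-CTS-HMAC-SHA1-96",
    0x17: "RC4-HMAC",
}

# Everything the filters above hunt for is legacy crypto.
LEGACY = {0x1, 0x2, 0x3, 0x17}

def classify(etype: int) -> str:
    """Return a human-readable label, flagging legacy DES/RC4 types."""
    name = ETYPES.get(etype, f"unknown (0x{etype:x})")
    return f"{name} [LEGACY]" if etype in LEGACY else name

print(classify(0x17))  # RC4-HMAC [LEGACY]
print(classify(0x12))  # AES256-CTS-HMAC-SHA1-96
```

This is just a convenience for post-processing exported events; the authoritative hunting mechanism remains the event log filters and forwarding setup described in the post.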
  4. Hi, I was working with a web application where I had already implemented methods to set the authorization cookie in the project's startup. Recently I needed to embed a Power BI dashboard in my application. Since Power BI has strict content security policies that only allow Microsoft-related domains such as Teams and SharePoint to embed reports, the report was blocked from rendering in my app, so I switched to the Power BI embed API. While configuring the Power BI settings in the startup, Power BI also configured some auth cookies. Since my application already implements auth cookies, this conflicted with the existing implementation and caused build errors. I tried setting the cookies in the Power BI configuration and then added my application's required cookie settings to the cookie being set there, but to no avail: my application won't recognize the claims and cookies set in the Power BI configuration and throws an error while logging into the application. View the full article
  5. When I want to right-click on the inbox under "Shared with me" in New Outlook, the pane doesn't show up, so I cannot pin it to Favorites. There is simply nothing, just a very short shadow of the pane. I tried to delete and reinstall the new Outlook app, but to no avail. Does anyone have a solution here? I would highly appreciate your help. Kind regards, Astrid View the full article
  6. I'm trying to import a JSON list file to my SharePoint Online website, but it seems impossible to import it directly. Does anyone know a way via PnP? I saw a PnP method named "fromJson", but I don't know how to use it to import my files to my SharePoint website. Thanks in advance! View the full article
  7. In today’s fast-paced world of AI applications, optimizing performance should be one of your top priorities. This guide walks you through a simple yet powerful way to reduce OpenAI embedding response sizes by 75%, cutting them from 32 KB to just 8 KB per request. By switching from float32 to base64 encoding in your Retrieval-Augmented Generation (RAG) system, you can achieve a 4x efficiency boost, minimizing network overhead, saving costs, and dramatically improving responsiveness. Let's consider the following scenario.

Use Case: RAG Application Processing a 10-Page PDF

A user interacts with a RAG-powered application that processes a 10-page PDF and uses OpenAI embedding models to make the document searchable from an LLM. The goal is to show how optimizing embedding response size impacts overall system performance.

Step 1: Embedding Creation from the 10-Page PDF

In a typical RAG system, the first step is to embed documents (in this case, a 10-page PDF) to store meaningful vectors that will later be retrieved for answering queries. The PDF is split into chunks. In our example, each chunk contains approximately 100 tokens (for the sake of simplicity), but the recommended chunk size varies based on the language and the embedding model.

Assumptions for the PDF:
- A 10-page PDF has approximately 3325 tokens (about 330 tokens per page).
- You’ll split this document into 34 chunks (each containing roughly 100 tokens).
- Each chunk will then be sent to the OpenAI embedding API for processing.

Step 2: The User Interacts with the RAG Application

Once the embeddings for the PDF are created, the user interacts with the RAG application, querying it multiple times. Each query is processed by retrieving the most relevant pieces of the document using the previously created embeddings. For simplicity, let’s assume:
- The user sends 10 queries, each containing 200 tokens.
- Each query requires 2 embedding requests (since the query is split into 100-token chunks for embedding).
- After embedding the query, the system performs retrieval and returns the most relevant documents (the RAG response).

Embedding Response Size

The OpenAI embeddings models take an input of tokens (the text to embed) and return a list of numbers called a vector. This list of numbers represents the “embedding” of the input in the model so that it can be compared with another vector to measure similarity. In RAG, we use embedding models to quickly search for relevant data in a vector database. By default, embeddings are serialized as an array of floating-point values in a JSON document, so each response from the embedding API is relatively large. The array values are 32-bit floating point numbers (float32). Each float32 value occupies 4 bytes, and the embedding vector returned by models like OpenAI’s text-embedding-ada-002 typically has 1536 dimensions. The challenge is the size of the embedding response:
- Each response consists of 1536 float32 values (one per dimension).
- 1536 float32 values amount to 6144 bytes (1536 × 4 bytes).
- When serialized as decimal text in a UTF-8 JSON document, this grows to approximately 32 KB per response, since each float is written out as a roughly 20-character string plus delimiters.

Optimizing Embedding Response Size

One way to optimize the embedding response size is to serialize the embedding as base64. Note that base64 is not compression: the saving comes from transmitting the raw 6144-byte float32 buffer (base64-encoded to about 8 KB) instead of verbose decimal text, while preserving the embedding values exactly. This leads to a significant reduction in the size of the embedding response.
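To make the 32 KB vs 8 KB figures concrete, here is a small Python sketch (illustrative, not from the article) comparing the two serializations of a 1536-dimensional vector:

```python
import base64
import json
import random
import struct

dims = 1536  # e.g. text-embedding-ada-002 / text-embedding-3-small
vec = [random.uniform(-1.0, 1.0) for _ in range(dims)]

# Default API behavior: floats serialized as decimal text in JSON.
json_size = len(json.dumps(vec).encode("utf-8"))

# base64 alternative: the raw little-endian float32 buffer, encoded.
raw = struct.pack(f"<{dims}f", *vec)
b64_size = len(base64.b64encode(raw))  # 6144 bytes -> exactly 8192 bytes

print(json_size, b64_size)
```

The base64 form is always exactly 8192 bytes for 1536 dimensions (6144 raw bytes grow by the fixed 4/3 base64 ratio), while the JSON decimal form fluctuates around 30 KB depending on how many digits each float needs.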
With base64-encoded embeddings, the response size reduces from 32 KB to approximately 8 KB, as demonstrated below:

base64 vs float32                                  Min (Bytes)   Max (Bytes)   Mean (Bytes)   Min (+)                   Max (+)                   Mean (+)
100 tokens embeddings: text-embedding-3-small      32673.000     32751.000     32703.800      8192.000 (4.0x) (74.9%)   8192.000 (4.0x) (75.0%)   8192.000 (4.0x) (74.9%)
100 tokens embeddings: text-embedding-3-large      65757.000     65893.000     65810.200      16384.000 (4.0x) (75.1%)  16384.000 (4.0x) (75.1%)  16384.000 (4.0x) (75.1%)
100 tokens embeddings: text-embedding-ada-002      32882.000     32939.000     32909.000      8192.000 (4.0x) (75.1%)   8192.000 (4.0x) (75.2%)   8192.000 (4.0x) (75.1%)

The source code of this benchmark can be found at: https://github.com/manekinekko/rich-bench-node (kudos to Anthony Shaw for creating the rich-bench python runner)

Comparing the Two Scenarios

Let’s break down and compare the total performance of the system in two scenarios:

Scenario 1: Embeddings Serialized as float32 (32 KB per Response)
Scenario 2: Embeddings Serialized as base64 (8 KB per Response)

Scenario 1: Embeddings Serialized as Float32

In this scenario, the PDF embedding creation and user queries involve larger responses due to float32 serialization. Let’s compute the total response size for each phase:

1. Embedding Creation for the PDF:
   - 34 embedding requests (one per 100-token chunk).
   - 34 responses of 32 KB each.
   - Total size for PDF embedding responses: 34 × 32 KB = 1088 KB = 1.088 MB

2. User Interactions with the RAG App:
   - Each user query consists of 200 tokens (split into 2 chunks of 100 tokens).
   - 10 user queries, requiring 2 embedding responses per query (one per chunk).
   - Each embedding response is 32 KB.
   - Embedding responses: 20 × 32 KB = 640 KB.
   - RAG responses: 10 × 32 KB = 320 KB.
   - Total size for user interactions: 640 KB (embedding) + 320 KB (RAG) = 960 KB.

3. Total Size:
   - Total size for embedding responses (PDF + user queries): 1088 KB + 640 KB = 1728 KB = 1.728 MB
   - Total size for RAG responses: 320 KB.
   - Overall total size for all responses: 1728 KB + 320 KB = 2048 KB = 2 MB

Scenario 2: Embeddings Serialized as Base64

In this optimized scenario, the embedding response size is reduced to 8 KB by using base64 encoding.

1. Embedding Creation for the PDF:
   - 34 embedding requests.
   - 34 responses of 8 KB each.
   - Total size for PDF embedding responses: 34 × 8 KB = 272 KB.

2. User Interactions with the RAG App:
   - Embedding responses for 10 queries, 2 responses per query.
   - Each embedding response is 8 KB.
   - Embedding responses: 20 × 8 KB = 160 KB.
   - RAG responses: 10 × 8 KB = 80 KB.
   - Total size for user interactions: 160 KB (embedding) + 80 KB (RAG) = 240 KB.

3. Total Size (Optimized Scenario):
   - Total size for embedding responses (PDF + user queries): 272 KB + 160 KB = 432 KB.
   - Total size for RAG responses: 80 KB.
   - Overall total size for all responses: 432 KB + 80 KB = 512 KB

Performance Gain: Comparison Between Scenarios

The optimized scenario (base64 encoding) is 4 times smaller than the original (float32 encoding): 2048 / 512 = 4. The total size reduction between the two scenarios is 2048 KB - 512 KB = 1536 KB = 1.536 MB, and the reduction in data size is (1536 / 2048) × 100 = 75%.

How to Configure the base64 Encoding Format

When getting a vector representation of a given input that can be easily consumed by machine learning models and algorithms, as a developer, you usually call either the OpenAI API endpoint directly or use one of the official libraries for your programming language.
Calling the OpenAI or Azure OpenAI APIs

Using the OpenAI endpoint:

curl -X POST "https://api.openai.com/v1/embeddings" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -d '{
    "input": "The five boxing wizards jump quickly",
    "model": "text-embedding-ada-002",
    "encoding_format": "base64"
  }'

Or, calling an Azure OpenAI resource:

curl -X POST "https://{endpoint}/openai/deployments/{deployment-id}/embeddings?api-version=2024-10-21" \
  -H "Content-Type: application/json" \
  -H "api-key: YOUR_API_KEY" \
  -d '{
    "input": ["The five boxing wizards jump quickly"],
    "encoding_format": "base64"
  }'

Using OpenAI Libraries

JavaScript/TypeScript:

const response = await client.embeddings.create({
  input: "The five boxing wizards jump quickly",
  model: "text-embedding-3-small",
  encoding_format: "base64"
});

A pull request has been sent to the openai SDK for Node.js repository to make base64 the default encoding when the user does not provide one. Please feel free to give that PR a thumbs up.

Python:

embedding = client.embeddings.create(
    input="The five boxing wizards jump quickly",
    model="text-embedding-3-small",
    encoding_format="base64"
)

NB: from version 1.62, the openai SDK for Python defaults to base64.

Java:

EmbeddingCreateParams embeddingCreateParams = EmbeddingCreateParams
    .builder()
    .input("The five boxing wizards jump quickly")
    .encodingFormat(EncodingFormat.BASE64)
    .model("text-embedding-3-small")
    .build();

.NET:

The openai-dotnet library already enforces base64 encoding and does not allow the user to set encoding_format (see).

Conclusion

By optimizing the embedding response serialization from float32 to base64, you achieved a 75% reduction in data size and improved performance by 4x. This reduction significantly enhances the efficiency of your RAG application, especially when processing large documents like PDFs and handling multiple user queries.
For 1 million users sending 1,000 requests per month, the total size saved would be approximately 22.9 TB per month simply by using base64-encoded embeddings. As demonstrated, optimizing the size of the API responses is not only crucial for reducing network overhead but also for improving the overall responsiveness of your application. In a world where efficiency and scalability are key to delivering robust AI-powered solutions, this optimization can make a substantial difference in both performance and user experience. Shoutout to my colleague Anthony Shaw for the long and great discussions we had about embedding optimizations. View the full article
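One practical detail the post above does not show: when you request base64, the embedding field arrives as a base64 string of raw little-endian float32 bytes, and (outside the official SDKs, which decode it for you) you need to unpack it yourself. A minimal sketch, assuming that byte layout:

```python
import base64
import struct

def decode_embedding(b64: str) -> list[float]:
    """Decode a base64-encoded embedding back into float32 values."""
    raw = base64.b64decode(b64)
    count = len(raw) // 4  # 4 bytes per float32
    return list(struct.unpack(f"<{count}f", raw))

# Round-trip check with a made-up 3-dimensional vector, not a real
# API response (the values chosen are exactly representable in float32):
vec = [0.25, -0.5, 1.0]
encoded = base64.b64encode(struct.pack("<3f", *vec)).decode("ascii")
print(decode_embedding(encoded))  # [0.25, -0.5, 1.0]
```

Note that arbitrary floats will round to float32 precision on the way through, which is the same precision the default JSON response carries.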
  8. Azure Container Apps provides a seamless way to build, deploy, and scale cloud-native applications without the complexity of managing infrastructure. Whether you’re developing microservices, APIs, or AI-powered applications, this fully managed service enables you to focus on writing code while Azure handles scalability, networking, and deployments. In this blog post, we explore five essential aspects of Azure Container Apps—each highlighted in a one-minute video. From intelligent applications and secure networking to effortless deployments and rollbacks, these insights will help you maximize the capabilities of serverless containers on Azure. Azure Container Apps - in 1 Minute Azure Container Apps is a fully managed platform designed for cloud-native applications, providing effortless deployment and scaling. It eliminates infrastructure complexity, letting developers focus on writing code while Azure automatically handles scaling based on demand. Whether running APIs, event-driven applications, or microservices, Azure Container Apps ensures high performance and flexibility with minimal operational overhead. Watch the video on YouTube Intelligent Apps with Azure Container Apps – in 1 Minute Azure Container Apps, Azure OpenAI, and Azure AI Search make it possible to build intelligent applications with Retrieval-Augmented Generation (RAG). Your app can call Azure OpenAI in real-time to generate and interpret data, while Azure AI Search retrieves relevant information, enhancing responses with up-to-date context. For advanced scenarios, AI models can execute live code via Azure Container Apps, and GPU-powered instances support fine-tuning and inferencing at scale. This seamless integration enables AI-driven applications to deliver dynamic, context-aware functionality with ease. 
Watch the video on YouTube Networking for Azure Container Apps: VNETs, Security Simplified – in 1 Minute Azure Container Apps provides built-in networking features, including support for Virtual Networks (VNETs) to control service-to-service communication. Secure internal traffic while exposing public endpoints with custom domain names and free certificates. Fine-tuned ingress and egress controls ensure that only the right traffic gets through, maintaining a balance between security and accessibility. Service discovery is automatic, making inter-app communication seamless within your Azure Container Apps environment. Watch the video on YouTube Azure Continuous Deployment and Observability with Azure Container Apps - in 1 Minute Azure Container Apps simplifies continuous deployment with built-in integrations for GitHub Actions and Azure DevOps pipelines. Every code change triggers a revision, ensuring smooth rollouts with zero downtime. Observability is fully integrated via Azure Monitor, Log Streaming, and the Container Console, allowing you to track performance, debug live issues, and maintain real-time visibility into your app’s health—all without interrupting operations. Watch the video on YouTube Effortless Rollbacks and Deployments with Azure Container Apps – in 1 Minute With Azure Container Apps, every deployment creates a new revision, allowing multiple versions to run simultaneously. This enables safe, real-time testing of updates without disrupting production. Rolling back is instant—just select a previous revision and restore your app effortlessly. This powerful revision control system ensures that deployments remain flexible, reliable, and low-risk. Watch the video on YouTube Watch the Full Playlist For a complete overview of Azure Container Apps capabilities, watch the full JavaScript on Azure Container Apps YouTube Playlist Create Your Own AI-Powered Video Content Inspired by these short-form technical videos? 
You can create your own AI-generated videos using Azure AI to automate scriptwriting and voiceovers. Whether you're a content creator or a business looking to showcase technical concepts, Azure AI makes it easy to generate professional-looking explainer content. Learn how to create engaging short videos with Azure AI by following our open-source AI Video Playbook.

Conclusion

Azure Container Apps is designed to simplify modern application development by providing a fully managed, serverless container environment. Whether you need to scale microservices, integrate AI capabilities, enhance security with VNETs, or streamline CI/CD workflows, Azure Container Apps offers a comprehensive solution. By leveraging its built-in features such as automatic scaling, revision-based rollbacks, and deep observability, developers can deploy and manage applications with confidence. These one-minute videos provide a quick technical overview of how Azure Container Apps empowers you to build scalable, resilient applications with ease.

FREE Content

Check out our other FREE content to learn more about Azure services and Generative AI:
- Generative AI for Beginners - A JavaScript Adventure!
- Learn more about Azure AI Agent Service
- LlamaIndex on Azure
- JavaScript on Azure Container Apps
- JavaScript at Microsoft
View the full article
  9. Hello Video Community, a video cannot be played. Deleting cookies does not help. According to the developer tools (F12) in Chrome, error messages come from suspected CDNs:

GET 400 Bad Request https://westeurope1-mediap.svc.ms/transform/videotranscode/8889cff9eb011943565d1b48801ea96ac9a...

[Report Only] Refused to frame 'https://login.microsoftonline.com/' because it violates the following Content Security Policy directive: "frame-src 'self' https://support.office.com https://webshell.suite.office.com/ *.cloud.microsoft".

The video runs a little better in Microsoft Edge, but the error message still appears at some point. Is there a solution here? View the full article
  10. Webinar Registration: HERE Join us for an exciting webinar to celebrate the launch of Copilot Analytics for Agents. This session will explore how Copilot Analytics can transform your business by measuring AI's impact on productivity and ROI. We'll cover: Latest Research Findings: Discover the newest insights on how AI is transforming businesses and discuss the value of agents. Overview of Copilot Analytics: Learn how Copilot Analytics measures AI's influence on business operations. Introduction to Agentic Reports: Get a first look at our new agentic reports and learn how they can be used to gain valuable insights. Don't miss the opportunity to engage with experts during our session and gain valuable insights to optimize your AI investments. We look forward to your participation! View the full article
  11. Webinar Registration: HERE Join us for an early look at the Employee Self-Service Agent, a new set of capabilities in M365 Copilot to answer policy questions and complete workplace services tasks, starting with HR and IT. Our vision is to power new self-service workflows across the organization, starting with a focus on HR service delivery and expanding to additional areas like IT, facilities and more. This webinar is designed for leaders in IT and HR who are looking to improve their digital employee experience. In this private webinar you'll hear leaders in Microsoft HR and Product teams discuss the following: How Microsoft HR uses AI to improve the employee experience The current challenges of HR and IT service delivery and how generative AI can transform this space An overview of the Employee Self-Service Agent Product demos, feedback and roadmap suggestions Opportunities to stay engaged and participate in private preview programs View the full article
  12. Webinar Registration: HERE Getting ready to implement Microsoft Employee Self-Service Agent? Join us for a step-by-step guide on how to prepare your organization for a smooth and successful rollout. Learn the key actions to take before implementation and best practices for readiness. Don’t miss this essential session—register now! View the full article
  13. Hi I am trying to get the total of date selected to match dietary, can anyone advise what is the formula to use? file as attached. Thank you View the full article
  14. Introduction

We've received numerous queries about WordPress on App Service, and we love it! Your feedback helps us improve our offerings. A common theme is the challenges faced with non-managed WordPress setups. Our managed WordPress offering on App Service is designed to be highly performant, secure, and seamlessly integrated with Azure services like MySQL flexible server, CDN/Front Door, Blob Storage, VNET, and Azure Communication Services. While some specific cases might require a custom WordPress setup, most users benefit significantly from our managed service, enjoying better performance, security, easier management, and cost savings. In this article, we'll explore how to identify if you're using the managed offering and how to transition if you're not.

Why Choose Managed WordPress on App Service?

Under the Hood

Optimized Container Image: We use a container image with numerous optimizations. Learn more: https://github.com/Azure/wordpress-linux-appservice

Environment Variables: These configure WordPress and integrate various Azure resources. Learn more: https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_application_settings.md

Azure resources: We integrate multiple Azure resources like App Service, MySQL flexible database, Entra ID, VNET, ACS Email, CDN/Front Door, and Blob storage, all configured via environment variables. Each resource is also individually configured to work best with WordPress.

Benefits of Managed Offering

Managed Tech Stack: Our team handles updates for PHP, Nginx, WordPress, etc., ensuring you're always on the latest versions without performance or security concerns.
Read more: https://techcommunity.microsoft.com/blog/appsonazureblog/how-to-keep-your-wordpress-website-stack-on-azure-app-service-up-to-date/3832193

Managed MySQL Instance: We use Azure Database for MySQL flexible server as the WordPress database. Many customers use in-app databases, which increase maintenance costs and require manual configuration. Our managed MySQL instance is optimized (server parameters) for performance and security, and you don't need to worry about upgrades.

Azure Service Integrations: Our managed offering integrates seamlessly with Azure services like CDN, Front Door, Entra ID, VNET, and Communication Services for Email. These integrations are important for enhancing the WordPress experience. For example, without ACS Email, WordPress can't send emails, affecting tasks like password resets and user invitations. We handle these integrations through environment variables, simplifying the setup. Learn more: https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_application_settings.md

Simplified creation: Creating a WordPress site involves configuring various resources, which can be complex. Our managed service simplifies this process. See how to create a WordPress site: https://learn.microsoft.com/en-us/azure/app-service/quickstart-wordpress

Simplified management: Managing multiple resources can be complex. We manage this through environment variables, and we extend this capability to complex WordPress configurations as well. For example, WordPress multisite: https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_multisite_installation.md

Security: We provide best-in-class security, like the use of managed identities: https://techcommunity.microsoft.com/blog/appsonazureblog/managed-identity-support-for-wordpress-on-app-service/4241435.
We ensure all resources are within a VNET and provide phpMyAdmin for database management: https://github.com/Azure/wordpress-linux-appservice/blob/main/WordPress/wordpress_phpmyadmin.md

Performance improvements: We have optimized performance with the W3TC plugin, local storage caching, and efficient use of caching, content delivery, and storage.

Others: There are a bunch of other interesting features that you might be interested in:
https://learn.microsoft.com/en-us/azure/app-service/overview-wordpress
https://learn.microsoft.com/en-us/azure/app-service/wordpress-faq

How to Check if You're Using the Managed WordPress on App Service?

To determine if you're using the managed offering, follow these steps:

1. Check the Container Image: Go to the App Service overview page in the Azure portal. Look for the "Container image" in the properties tab. If the image matches one of our supported images (https://github.com/Azure/wordpress-linux-appservice), you're likely using the managed service. If not, you'll need to migrate to the managed offering, which we'll cover later.
Fig 1.1: Managed WordPress - Container image
Fig 1.2: Non-Managed WordPress - No container image

2. Verify Environment Variables: Access the Kudu console and navigate to the File manager. Open the /home/site/wwwroot/wp-config.php file and check if it uses the environment variables correctly.
Fig 2.1: Managed WordPress – use of getenv() function for database credentials
Fig 2.2: Non-Managed WordPress – hardcoded DB credentials

3. Check Deployment Status: In the File manager, locate the /home/wp-locks/wp_deployment_status.txt file. WARNING: Do not edit this file, as it may cause unintended issues. Simply check the entries. If the file is missing or its contents differ from the expected entries, you're using a non-managed WordPress site. If the file is present and the contents match, you're on the managed offering.
Fig 3.1: Managed WordPress – wp_deployment_status.txt has entries
Fig 3.2: Non-Managed WordPress – No /home/wp-locks folder

How to transition to the managed offering?

Transitioning to the managed WordPress on App Service can be done in two ways:

Highly Recommended Approach: Follow these steps:

1. Create a new managed WordPress site: Follow the steps in this setup guide to create a new managed WordPress site on App Service. https://techcommunity.microsoft.com/blog/appsonazureblog/how-to-set-up-a-new-wordpress-website-on-azure-app-service/3729150

2. Migrate Content Using the All-in-One Migration Plugin: Use the All-in-One Migration plugin to transfer your content from the source site to the new managed site. This migration guide provides detailed instructions; although it's tailored for migrating from WP Engine, the steps apply to this scenario as well. Simply skip the WP Engine-specific steps. https://techcommunity.microsoft.com/blog/appsonazureblog/how-to-migrate-from-wp-engine-to-wordpress-on-app-service/4259573

3. Point Your Custom Domain to the New Site: Update your custom domain to point to the new managed WordPress site. Follow the instructions in this custom domain guide. https://techcommunity.microsoft.com/blog/appsonazureblog/how-to-use-custom-domains-with-wordpress-on-app-service/3886247

Not Recommended Approach: Some customers ask if they can simply apply the managed container image, add environment variables, and create the necessary resources manually. While this is technically possible, it often leads to numerous errors and involves many steps. If any step goes wrong, you might not achieve the desired outcome and could potentially break your existing site.
The recommended approach ensures your existing site remains safe and intact until the new site is fully operational. We hope you transition to the managed WordPress on App Service and enjoy the best WordPress experience!

Support and Feedback

We're here to help! If you need any assistance, feel free to open a support request through the Microsoft Azure portal: New support request - Microsoft Azure. For more details about our offering, check out the announcement on the General Availability of WordPress on Azure App Service in the Microsoft Tech Community: Announcing the General Availability of WordPress on Azure App Service - Microsoft Tech Community.

We value your feedback and ideas on how we can improve WordPress on Azure App Service. Share your thoughts and suggestions on our Community page (Post idea · Community (azure.com)) or report any issues on our GitHub repository (Issues · Azure/wordpress-linux-appservice (github.com)). Alternatively, you can start a conversation with us by emailing wordpressonazure@microsoft.com. View the full article
  15. Research Drop in Brief: The percentage of organizations piloting or deploying AI solutions has risen by 20% since 2023. It's time to focus on strong HR and IT collaboration to drive holistic AI integration and successful workforce transformation. HR is in the best spot to help IT bring employees along on the AI journey. HR needs resourcing to play "catch up" to IT, such as greater access to organization-sponsored AI tools, involvement in training/upskilling, and participation in cross-functional experimentation.

AI continues to be ubiquitous. We see massive growth in AI adoption across industries, with the percentage of organizations piloting or deploying AI solutions up by 20% since 2023 [1]. With this exponential growth comes rapid change: new strategies, recommendations, use cases, and best practices are discovered and shared on what feels like a daily basis. In 2025, we are seeing organizations strengthen functional partnerships to help organizational AI transformation succeed. Departments exist in organizations to bring unique skill sets, expertise, and perspectives, and these diverse perspectives should be included when thinking about AI at your organization. To date, IT has been leading the charge for AI transformation, but more and more we see high-performing organizations involving HR in their strategy and implementation.

IT brings the tech; HR brings the people. HR and IT are both critical to AI transformation, and they shouldn't be operating in silos. HR is set up to lead the charge in reskilling, upskilling, and talent management in the era of AI, while IT is orchestrating and managing the tools and systems [2]. A benefit of including HR is that it deepens the connection to employees, increasing their involvement in the transformation and reducing their fear of the unknown.
When these functions are aligned, they accelerate AI implementation and workforce integration by deepening adoption, increasing ROI, and strengthening data governance. Our data shows that 73% of HR employees and 82% of IT employees believe AI will transform work for the better [3]. While this majority is encouraging, what can we learn about HR employees' AI experience to explain an almost 10-percentage-point difference between functions? And how can these functions be better positioned to collaborate and work in tandem? For this month's Research Drop, we explore the different AI perceptions and experiences of HR and IT, and how organizations can better align these critical functions to drive a more holistic AI transformation.

IT employees' advanced AI engagement reflects their central role in organizational technology

As AI is inherently a technology, it makes sense that IT employees might be the first functional group to learn about it and work with it. Technology is ingrained in their day-to-day work and their identity; 81% of IT employees agree that it's important for them to be among the first to use new technologies. This natural inclination and excitement for technology innovation places IT in a key position for AI transformation. IT leaders are looking to shift the function of the IT department from building and maintaining to orchestrating and innovating, further expanding its scope to streamline transformation efforts across business facets [4]. This key role has fast-tracked IT employees' perceived value of integrating AI at work: 79% of IT employees are excited about a future where everyone uses AI at work.

For HR employees, experience with AI at work is growing more slowly, taking a bit more time to catch up with their IT peers. While 68% of IT employees (and 77% of IT leaders) believe that AI in their workplace will boost revenue and financial success, only 55% of HR employees (and 63% of HR leaders) feel the same.
As more HR departments get involved with their organization's AI transformation, we expect this vision to crystallize and more HR use cases and applications to become tangible. For example, the Employee Self-Service Agent in Microsoft 365 Copilot (ESS) enhances HR efficiency and employee satisfaction by streamlining processes, automating routine tasks, reducing support tickets, and providing customizable, user-friendly solutions. Integrated AI solutions such as ESS are changing HR functionality by reducing transactional tasks and creating space to focus more on relational tasks (e.g., mental health support), which are core to HR's mission [5].

When planning HR and IT collaboration, focus on the common goals between the groups and on how to use their unique perspectives and skill sets to achieve them. For example, a shared priority for both groups is data security. When asked about the biggest challenges of AI implementation, 28% of HR leaders mentioned compliance with data protection laws (e.g., HIPAA) and 25% mentioned ethical concerns about AI use [6]. HR is responsible for protecting and managing employees; when combined with IT's expertise in security and protection, these challenges can stay a priority and be effectively managed throughout large-scale AI rollouts.

Another shared bet is skilling. IT is positioned to provide user guides and technical walkthroughs for new technology. HR provides support from a skills perspective, offering deep expertise in learning motivation and efficacy, along with resources for large-scale development programs. By leveraging the unique strengths of both HR and IT, organizations can effectively address challenges and drive successful AI transformation. Know, however, that success for this partnership requires equitable access to AI tools and resources for both business units.
Access to organization-sponsored AI tools continues to be a differentiator for value realization

While the majority of HR and IT employees use AI at least once a week (66% of HR and 75% of IT), organization-sponsored access to AI tools and technologies isn't equal. When asked how many of the AI tools they use at work are sponsored by their organization, 72% of IT employees reported that their organization provides all or most of the AI tools they use; only 59% of HR employees reported the same. With the remaining 41% of HR employees having to BYOAI (bring your own AI) to work, organizations can't capitalize on the benefits of strategic AI adoption at scale [7], such as ROI tracking and centralized training programs. HR employees also report seeing fewer success stories circulated around their function: while 77% of IT employees feel inspired by stories of people successfully using AI at work, only 68% of HR employees feel the same. When employees are given the space and resources to experiment, it fuels a virtuous loop where more experimentation creates more realized value, which in turn leads to more experimentation, and so on [8].

Access is a propellant of adoption and realized value. We researched a set of positive outcomes of AI adoption, called RIVA, or Realized Individual Value of AI. RIVA encapsulates the various ways an employee can see a direct impact of AI use in their day-to-day work. When we break out HR and IT employees with "all or most" of their AI tools provided by their organization versus "some or none," the difference in RIVA is clear. For both HR and IT employees with high access to organization-sponsored AI tools, more than 75% report all six RIVA outcomes, ranging from stress reduction to faster task completion. When that access is low, reported RIVA drops by up to 17%.
Organization-sponsored tools likely come with leadership support, training, scenario libraries, and other resources that help employees capture value sooner. Without those scaled rollout benefits, employees are left on their own to navigate the changing workplace and avoid getting left behind. To drive strong collaboration between HR and IT, AI access for both functions should be a foundational step.

We see across these groups that while the direct benefits of using AI are the easiest to realize (e.g., AI helps complete tasks faster), the more subtle benefits are the hardest to achieve (e.g., AI helps make better decisions or reduces overall work stress). For example, we see high reported task-speed improvement for employees even with low access, likely because BYOAI tools are simple to apply to direct situations. For true AI transformation, however, the goal is to tackle the transformative use cases, where day-to-day work no longer looks the same as it did a few years ago (or even last week). The greater the organization-sponsored access, the better the chance of creating impact for both HR and IT employees, which positions them to be a driving force of organization-wide transformation.

Lean into HR and IT collaboration to accelerate AI transformation

Bringing together HR and IT for AI transformation strengthens the impact and value that your organization gets from investing in AI technologies and tools. Their skill sets are ideal for working in tandem to ensure that the proper systems are in place and the workforce is ready to adopt them. We offer three recommendations on how to lean into this partnership: ensure equitable resources, increase experimentation and sharing, and leverage HR to get closer to employees.

Ensure equitable cross-functional training and resourcing

Training and development are key to learning any new technology. According to the World Economic Forum, only 35% of employees are trained and knowledgeable in AI [9].
Within our sample, while 73% of HR employees and 80% of IT employees reported that they were adequately trained in AI and understand how to use it in their work, only 22% and 31%, respectively, strongly agreed. We may see discrepancies between how much training an employee thinks they need and how much more they could have when training is invested in and centralized. As IT is front and center in the AI transformation, its educational opportunities are likely the highest. For HR, however, 40% of HR leaders say a lack of resources (e.g., time, money, staff) is the biggest barrier to AI implementation [6]. This holds HR back from evolving beyond tactical use cases into strategic ones, where they need investment in AI-based data and technology competencies [10]. With the right resources, HR can take the lead role in a partnership with IT to identify organization-wide skill gaps and training needs.

Increase experimentation and sharing between peers, teams, and business units

The more opportunities employees have to experiment with AI, the better they get and the more value they see. As we've seen, however, some departments are better set up to lean into these processes. Large differences in AI adoption can create in-group/out-group mentalities that drive business silos and create limitations in data and information sharing, scaling AI technology, and cross-functional collaboration [11]. These are critical components of a successful AI transformation, where AI is optimized throughout the organization. In addition to finding balance in AI opportunities cross-functionally, seek to improve the effectiveness of collaboration and the culture of sharing. Design inclusive, common languages between functional teams that help bridge the gap between tech and non-tech teams [11]. Create communities or forums where employees across the organization can share quick tips, prompts, or use cases that helped them realize deeper value in AI [12].
Spin up an HR and IT task force dedicated to cross-pollination of resources focused on AI adoption. All these initiatives can help bring your teams closer together.

Leverage HR to bring employees closer to and more invested in AI transformation

With AI advancements moving more quickly than any previous workplace technology, it can be overwhelming to keep up, and employees may never feel "prepared enough." Employees may feel uncertain about how to get involved and upskilled with AI, and anxious about their future. HR is uniquely positioned to help employees feel grounded and informed. Organizations at the forefront of AI adoption are 2.5x as likely to have HR involve employees in identifying tasks, roles, and processes suitable for automation [13]. HR provides a direct line to the employee voice and employee input. Not only can HR directly influence IT's implementation strategy and priorities, but it can also strengthen employees' adoption tendencies. HR and IT can collaborate on measuring AI transformation success through employee technology behaviors and employee sentiment feedback. Bringing these functions together maximizes AI implementation and ROI measurement capabilities.

A dynamic collaboration between HR and IT departments drives successful AI transformation. IT's central role in technology and HR's focus on your people create a powerful synergy, leading to effective AI implementation and workforce integration. By fostering cross-functional training, experimentation, and collaboration, organizations can unlock the full potential of AI, enhancing both employee adoption and the realized value of AI. Stay tuned for our April Research Drop to keep up with what the People Science team is learning!

[1] MIT Sloan Management Review. (November 11, 2024). Learning to manage uncertainty, with AI.
[2] Forbes. (February 11, 2025). IT isn't the new HR, and AI shouldn't be leading your team.
[3] Microsoft People Science Research analyzing 413 global employees in HR & IT based on our larger April 2024 AI Readiness Study dataset. Note: participants were asked to respond to questions around "generative artificial intelligence," which has been shortened to "AI" for the sake of this blog.
[4] Deloitte. (December 11, 2024). IT, amplified: AI elevates the reach (and remit) of the tech function.
[5] Mercer. (2025). Generative AI will transform three key HR roles.
[6] SHRM. (January 9, 2025). There's still time to revolutionize HR with AI.
[7] Microsoft WorkLab. (May 2024). 2024 Work Trend Index Annual Report.
[8] Microsoft People Science. (April 2024). The state of AI change readiness: Accelerating AI transformation through employee experience.
[9] World Economic Forum. (January 16, 2025). Unlocking human potential: Building a responsible AI-ready workforce for the future.
[10] Forbes. (January 22, 2025). 3 ways HR leaders can look inward to prepare for upheaval in 2025.
[11] Harvard Business Review. (May-June 2024). For success with AI, bring everyone on board.
[12] Microsoft WorkLab. (February 2025). When it comes to AI, don't build 'islands of intelligence.'
[13] i4cp. (January 23, 2025). Report: Workforce readiness in the era of AI.

View the full article
  16. I have a question about where to install drivers for my PC. Some just install and are good to go; some ask for a path. I am trying to install my Wi-Fi driver, and it shows a path it's going to; under that it shows the device name and says (no suitable driver?), and the option to install is grayed out. I don't get it: I am trying to install the drivers and it says no suitable driver. System is Windows 11 Pro 64-bit; mobo is MSI Z790 Tomahawk Max. Also, does anyone know how to do this on Windows 10? A friend has this board and there's no Wi-Fi driver support for Windows 10? I mean, it's not Windows 7 or ME. View the full article
  17. I am researching Intel® Core™ Ultra 7 processors and the AMD Ryzen™ AI 7 PRO 350 for a laptop/desktop. Do you have any suggestions between these? View the full article
  18. What is the easiest way to play my favorite childhood game on new PCs? I still have the discs. View the full article
  19. Hi. PC: 14900K stock with NZXT Kraken Elite 360, 2x16GB DDR5-6800, RTX 4090, Seasonic PX-1600, 2TB SSD, Aorus Z790 Elite X. I have a question about crashes with faulting module nvgpucomp64.dll. When my CPU was not stable, the game Remnant 2 was crashing with faulting module nvgpucomp64.dll, but I tweaked some BIOS settings and the game is now stable. However, I have now found in Event Viewer some crashes related to "SearchHost" or "XboxGameBar" etc., and the faulting module is also nvgpucomp64.dll. So is my CPU still not stable, or should I just ignore those errors with SearchHost and XboxGameBar? Thx. And why is the faulting module nvgpucomp64.dll the same every time? View the full article
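For hunting down crash reports like these, a sketch of an Event Viewer XML filter (Filter Current Log > XML tab) that narrows the Application log to Event ID 1000 from the "Application Error" provider; this assumes the standard crash-event layout, and the faulting module name still has to be read from each event's details:

```xml
<QueryList>
  <Query Id="0" Path="Application">
    <Select Path="Application">
      *[System[Provider[@Name='Application Error'] and (EventID=1000)]]
    </Select>
  </Query>
</QueryList>
```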
  20. When I turn my PC on, I get a black screen and a spinning cursor. I have turned my computer off and on over 20 times now. I have tried to uninstall the latest quality update and the latest feature update in Advanced options, but it still doesn't work. I tried System Restore, and that didn't work either. I have enabled Safe Mode, but I run into a blue screen saying we encountered a problem. I tried to go into the BIOS but I'm not sure what to do there. I am lost on what to try next. I did download the latest Nvidia graphics card driver (or something like that) the night before, and it was fine; then this morning when I opened my computer I couldn't get in. View the full article
  21. So this morning I made the mistake of turning on "get updates before they are out early." Well, Windows 24H2 started downloading and installed, so I was waiting for the usual blue screen. It installed with no blue screen; I did have to reinstall a couple of small programs to get them to work. Creating a restore point didn't work, however; even creating the restore point using Windows settings didn't seem to work. So, OK, I decided to restart. That's when it all went south. Dell fired up the system hardware check first (I never get that); after it was done, it said no errors were found. (BTW, I also ran sfc and a disk check before restarting; that also said no issues with the system.) So now I'm back to doing a full system backup recovery of 23H2. The question is, what could be the problem here? If 24H2 installed OK and wasn't giving me any errors, what would make it not boot as usual? View the full article
  22. Why am I seeing this message in the bottom corner of my computer? I have been using Windows 11 since it came out. I can't think it is a scam. I typed "activate windows" in the search bar and got this... View the full article
  23. I bought a brand new laptop, and this message appeared when I was digging to find out if Windows was activated. It is activated, but I think it is a duplicate. View the full article
  24. I need help. Thanks. View the full article
  25. Noticed a new button in the taskbar: "focus"; it opens this. Have they invented something new about clocks, or why does it demand an update? Windows 11 is a joke. View the full article