Microsoft Windows Bulletin Board

Windows Server

Everything posted by Windows Server

  1. Save the date and join us for our next monthly Windows Office Hours, on March 20th from 8:00-9:00a PT. We will have a broad group of product experts, servicing experts, and engineers representing Windows, Microsoft Intune, Configuration Manager, Windows 365, Windows Autopilot, security, public sector, FastTrack, and more. They will be standing by -- in chat -- to provide guidance, discuss strategies and tactics, and, of course, answer any specific questions you may have. For more details about how Windows Office Hours works, go to our Windows IT Pro Blog. If 8:00 a.m. Pacific Time doesn't work for you, post your questions on the Windows Office Hours: March 20 event page, up to 48 hours in advance. Hope you can join us! View the full article
  2. The OpenAI Agents SDK provides a powerful framework for building intelligent AI assistants with specialised capabilities. In this blog post, I'll demonstrate how to integrate Azure OpenAI Service and Azure API Management (APIM) with the OpenAI Agents SDK to create a banking assistant system with specialised agents. Key Takeaways: Learn how to connect the OpenAI Agents SDK to Azure OpenAI Service Understand the differences between direct Azure OpenAI integration and using Azure API Management Implement tracing with the OpenAI Agents SDK for monitoring and debugging Create a practical banking application with specialized agents and handoff capabilities The OpenAI Agents SDK The OpenAI Agents SDK is a powerful toolkit that enables developers to create AI agents with specialised capabilities, tools, and the ability to work together through handoffs. It's designed to work seamlessly with OpenAI's models, but can be integrated with Azure services for enterprise-grade deployments. Setting Up Your Environment To get started with the OpenAI Agents SDK and Azure, you'll need to install the necessary packages: pip install openai openai-agents python-dotenv You'll also need to set up your environment variables. Create a `.env` file with your Azure OpenAI or APIM credentials: For Direct Azure OpenAI Connection: # .env file for Azure OpenAI AZURE_OPENAI_API_KEY=your_api_key AZURE_OPENAI_API_VERSION=2024-08-01-preview AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/ AZURE_OPENAI_DEPLOYMENT=your-deployment-name For Azure API Management (APIM) Connection: # .env file for Azure APIM AZURE_APIM_OPENAI_SUBSCRIPTION_KEY=your_subscription_key AZURE_APIM_OPENAI_API_VERSION=2024-08-01-preview AZURE_APIM_OPENAI_ENDPOINT=https://your-apim-name.azure-api.net/ AZURE_APIM_OPENAI_DEPLOYMENT=your-deployment-name Connecting to Azure OpenAI Service The OpenAI Agents SDK can be integrated with Azure OpenAI Service in two ways: direct connection or through Azure API Management (APIM). Option 1: Direct Azure OpenAI Connection from openai import AsyncAzureOpenAI from agents import set_default_openai_client from dotenv import load_dotenv import os # Load environment variables load_dotenv() # Create OpenAI client using Azure OpenAI openai_client = AsyncAzureOpenAI( api_key=os.getenv("AZURE_OPENAI_API_KEY"), api_version=os.getenv("AZURE_OPENAI_API_VERSION"), azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"), azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT") ) # Set the default OpenAI client for the Agents SDK set_default_openai_client(openai_client) Option 2: Azure API Management (APIM) Connection from openai import AsyncAzureOpenAI from agents import set_default_openai_client from dotenv import load_dotenv import os # Load environment variables load_dotenv() # Create OpenAI client using Azure APIM openai_client = AsyncAzureOpenAI( api_key=os.getenv("AZURE_APIM_OPENAI_SUBSCRIPTION_KEY"), # Note: Using subscription key api_version=os.getenv("AZURE_APIM_OPENAI_API_VERSION"), azure_endpoint=os.getenv("AZURE_APIM_OPENAI_ENDPOINT"), azure_deployment=os.getenv("AZURE_APIM_OPENAI_DEPLOYMENT") ) # Set the default OpenAI client for the Agents SDK set_default_openai_client(openai_client) Key Difference: When using Azure API Management, you use a subscription key instead of an API key. This provides an additional layer of management, security, and monitoring for your OpenAI API access. 
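The agent examples below pass a check_account_balance tool that is defined elsewhere; as a minimal sketch of what such a function tool could look like (the demo data and signature here are illustrative assumptions, not from the original article):

from agents import function_tool

# Hypothetical demo data; a real tool would call a banking backend or API.
_DEMO_BALANCES = {"12345": 2543.22, "67890": 187.90}

@function_tool
def check_account_balance(account_id: str) -> str:
    """Return the current balance for a customer account (demo data only)."""
    balance = _DEMO_BALANCES.get(account_id)
    if balance is None:
        return f"No account found with id {account_id}."
    return f"Account {account_id} has a balance of ${balance:,.2f}."

The SDK derives the tool schema from the function signature and docstring, so descriptive names, type hints, and docstrings matter.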
Creating Agents with the OpenAI Agents SDK Once you've set up your Azure OpenAI or APIM connection, you can create agents using the OpenAI Agents SDK: from agents import Agent from openai.types.chat import ChatCompletionMessageParam # Create a banking assistant agent banking_assistant = Agent( name="Banking Assistant", instructions="You are a helpful banking assistant. Be concise and professional.", model="gpt-4o", # This will use the deployment specified in your Azure OpenAI/APIM client tools=[check_account_balance] # A function tool defined elsewhere ) The OpenAI Agents SDK automatically uses the Azure OpenAI or APIM client you've configured, making it seamless to switch between different Azure environments or configurations. Implementing Tracing with Azure OpenAI The OpenAI Agents SDK includes powerful tracing capabilities that can help you monitor and debug your agents. When using Azure OpenAI or APIM, you can implement two types of tracing: 1. Console Tracing for Development from agents.tracing.processors import ConsoleSpanExporter, BatchTraceProcessor from agents.tracing import set_default_trace_processor # Set up console tracing console_exporter = ConsoleSpanExporter() console_processor = BatchTraceProcessor(exporter=console_exporter) set_default_trace_processor(console_processor) 2. OpenAI Tracing for Production Monitoring from agents.tracing.processors import OpenAITracingExporter, BatchTraceProcessor from agents.tracing import set_default_trace_processor import os # Set up OpenAI tracing openai_exporter = OpenAITracingExporter(api_key=os.getenv("OPENAI_TRACING_API_KEY")) openai_processor = BatchTraceProcessor(exporter=openai_exporter) set_default_trace_processor(openai_processor) Tracing is particularly valuable when working with Azure deployments, as it helps you monitor usage, performance, and behavior across different environments. Running Agents with Azure OpenAI To run your agents with Azure OpenAI or APIM, use the Runner class from the OpenAI Agents SDK: from agents import Runner import asyncio async def main(): # Run the banking assistant result = await Runner.run( banking_assistant, input="Hi, I'd like to check my account balance." ) print(f"Response: {result.response.content}") if __name__ == "__main__": asyncio.run(main()) Practical Example: Banking Agents System Let's look at how we can use Azure OpenAI or APIM with the OpenAI Agents SDK to create a banking system with specialized agents and handoff capabilities. 1. Define Specialized Banking Agents We'll create several specialized agents: General Banking Assistant: Handles basic inquiries and account information Loan Specialist: Focuses on loan options and payment calculations Investment Specialist: Provides guidance on investment options Customer Service Agent: Routes inquiries to specialists 2. Implement Handoff Between Agents from agents import handoff, HandoffInputData from agents.extensions import handoff_filters # Define a filter for handoff messages def banking_handoff_message_filter(handoff_message_data: HandoffInputData) -> HandoffInputData: # Remove any tool-related messages from the message history handoff_message_data = handoff_filters.remove_all_tools(handoff_message_data) return handoff_message_data # Create customer service agent with handoffs customer_service_agent = Agent( name="Customer Service Agent", instructions="""You are a customer service agent at a bank. Help customers with general inquiries and direct them to specialists when needed. 
If the customer asks about loans or mortgages, handoff to the Loan Specialist. If the customer asks about investments or portfolio management, handoff to the Investment Specialist.""", handoffs=[ handoff(loan_specialist_agent, input_filter=banking_handoff_message_filter), handoff(investment_specialist_agent, input_filter=banking_handoff_message_filter), ], tools=[check_account_balance], ) 3. Trace the Conversation Flow from agents import trace async def main(): # Trace the entire run as a single workflow with trace(workflow_name="Banking Assistant Demo"): # Run the customer service agent result = await Runner.run( customer_service_agent, input="I'm interested in taking out a mortgage loan. Can you help me understand my options?" ) print(f"Response: {result.response.content}") if __name__ == "__main__": asyncio.run(main()) Benefits of Using Azure OpenAI/APIM with the OpenAI Agents SDK Integrating Azure OpenAI or APIM with the OpenAI Agents SDK offers several advantages: Enterprise-Grade Security: Azure provides robust security features, compliance certifications, and private networking options Scalability: Azure's infrastructure can handle high-volume production workloads Monitoring and Management: APIM provides additional monitoring, throttling, and API management capabilities Regional Deployment: Azure allows you to deploy models in specific regions to meet data residency requirements Cost Management: Azure provides detailed usage tracking and cost management tools Conclusion The OpenAI Agents SDK combined with Azure OpenAI Service or Azure API Management provides a powerful foundation for building intelligent, specialized AI assistants. By leveraging Azure's enterprise features and the OpenAI Agents SDK's capabilities, you can create robust, scalable, and secure AI applications for production environments. Whether you choose direct Azure OpenAI integration or Azure API Management depends on your specific needs for API management, security, and monitoring. Both approaches work seamlessly with the OpenAI Agents SDK, making it easy to build sophisticated agent-based applications. Azure OpenAI Service Azure APIM OpenAI Agents SDK AI Development Enterprise AIView the full article
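The loan_specialist_agent and investment_specialist_agent referenced in the handoffs above are described but not shown as code. A minimal sketch, assuming names and instructions of our own choosing (they are not from the original post), might be:

from agents import Agent

loan_specialist_agent = Agent(
    name="Loan Specialist",
    instructions=(
        "You are a loan specialist at a bank. Explain loan and mortgage options, "
        "estimate monthly payments, and keep answers concise and professional."
    ),
    model="gpt-4o",  # resolves to the Azure OpenAI/APIM deployment configured earlier
)

investment_specialist_agent = Agent(
    name="Investment Specialist",
    instructions=(
        "You are an investment specialist at a bank. Give general guidance on "
        "investment options and risk, not personalized financial advice."
    ),
    model="gpt-4o",
)

Define these before customer_service_agent so the handoff() calls can reference them.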
  3. Generative AI is becoming increasingly prevalent in healthcare, and its significance is continuing to grow. Given the documentation-intensive nature of healthcare, generative AI presents an excellent opportunity to help alleviate this burden. However, to truly offset the clinician workload, it is crucial that content is checked for reliability and consistency before it is validated by a human. We are pleased to announce the private preview of our clinical conflict detection safeguard, available through our healthcare agent service. This safeguard helps users identify potential clinical conflicts within documentation content, regardless of whether it was generated by a human or AI. Identifying Clinical Conflicts: Seven Detected Categories Every conflict identified by the clinical conflict detection safeguard will indicate the conflict type and reference document content that constitutes the conflict so that the healthcare provider user can validate and take appropriate actions. Opposition conflicts: Normal vs abnormal findings of the same body structure E.g. Left breast: Unremarkable <> The left breast demonstrates persistent circumscribed masses. Negative vs positive statements about the same clinical entity E.g. No cardiopulmonary disease <> Bibasilar atelectasis Lab/vital sign interpretation vs condition E.g. Low blood sugar level at admission <> Patient was admitted with hyperglycemia Opposite disorders/symptoms E.g. Hypernatremia <> Hyponatremia Sex information opposites E.g. Female patient comes in with ... <> Testis: Unremarkable Anatomical conflicts: Absent vs present body structures E.g. Cholelithiasis <> The gallbladder is absent History of removal procedure vs. present body structure E.g. Bilat Mastectomy (2010) <> Left breast: solid mass Conducted imaging study versus clinical finding of body structure E.g. Procedure: Chest XR <> Brain lesion Laterality mismatch of same clinical finding E.g. Results: Stable ductal carcinoma of left breast. <> A&P: Stage 0 stable ductal carcinoma of right breast. Value Conflicts: Condition vs. lab / vital sign / measurement E.g. Hypoglycemia <> Blood Gluc 145 Conflicting lab measurement on same timestamp E.g. 02/11/2022 WBC-8.0 <> 02/11/2022 WBC-5.5 Contraindication conflicts: Medication/substance allergy vs. prescribed medication E.g. He is allergic to acetaminophen. <> Home medication include Tylenol, ... Comparison conflicts: Increased/decreased statements vs. opposite measurements E.g. Ultrasound shows a 3 cm lesion in the bladder wall, previously 4 cm, an increase in size. Descriptive conflict: Positive vs unlikely statements of same condition E.g. Lungs: Pleural effusion is unlikely <> Assessment: Pleural effusion Conflicting characteristics of same condition E.g. Results: Stable small pleural effusion <> Impression: Small pleural effusion Multiple versus Single statement of same condition E.g. Findings: 9 mm lesion of upper pole right kidney <> Assessment: Right renal lesions Metadata conflicts: Age information in provided metadata vs documentation E.g. Date of Birth = “04-08-1990” Date of Service=”11-25-2024" <> A 42-year-old female presents for evaluation of pneumonia. Sex information in provided metadata vs documentation * E.g. 
Date of Service=”11-25-2024" Sex= “female” <> Finding: Prostate is enlarged A closer look Consider the following radiology report snippet: Exam: CT of the abdomen and pelvis Clinical history: LLQ pain x 10 days, cholecystectomy 6 weeks ago Findings: - New calcified densities are seen in the nondistended gallbladder. - Heterogeneous enhancement of the liver with periportal edema. No suspicious hepatic masses are identified. Portal veins are patent. - Gastrointestinal Tract: No abnormal dilation or wall thickening. Diverticulosis. - Kidneys are normal in size. The patient comes in post cholecystectomy for a CT of abdomen/pelvis. We can create a simple request to the clinical conflict detection safeguards like this: { "input_document":{ "document_id": "1", "document_text": "Exam: CT of the abdomen and pelvis\nClinical history: LLQ pain x 10 days, cholecystectomy 6 weeks ago\nFindings:\n- New calcified densities are seen in the nondistended gallbladder.\n- Heterogeneous enhancement of the liver with periportal edema. No suspicious hepatic masses are identified. Portal veins are patent.\n- Gastrointestinal Tract: No abnormal dilation or wall thickening. Diverticulosis.\n- Kidneys are normal in size.", "document_metadata":{ "document_type":"CLINICAL_REPORT", "date_of_service": "2024-10-10", "locale": "en-us" } }, "patient_metadata":{ "date_of_birth": "1944-01-01", "date_of_admission": "2024-10-10", "biological_sex": "FEMALE", "patient_id": "3" }, "request_id": "1" } The request provides the metadata for document text to allow for potential metadata conflict detections. The clinical conflict detection safeguard considers the document text together with the metadata and returns the following response: { "inferences": [ { "type": "ANATOMICAL_CONFLICT", "confidence_score": 1, "output_token": { "offsets": [ { "document_id": "1", "begin": 73, "end": 88 } ] }, "reference_token": { "offsets": [ { "document_id": "1", "begin": 153, "end": 165 }, { "document_id": "1", "begin": 166, "end": 177 } ] } } ], "status": "SUCCESS", "model_version": "1" } The safeguard picks up an anatomical conflict in the document text and provides text references using the offsets that make up the clinical conflict. In this case, it picks up an anatomical conflict between “cholecystectomy” (which means a gallbladder removal) and the finding of “New calcified densities are seen in the nondistended gallbladder”. The new densities in the gallbladder conflict with the statement that the gallbladder was removed 6 weeks prior. In practice The clinical conflicts detected by the safeguard can be leveraged in various stages of any report generation solution to build trust in its clinical consistency. Imagine a report generation application calling the clinical conflict detection safeguards to highlight potential inconsistencies to the HCP end user — as illustrated below — for review before signing off on the report. There are multiple conflicts in the example above, but the highlight shows inconsistently generated documentation. The normal statement about the lungs contradicts “small nodules in the left lung” findings, so the “Lungs are unremarkable” statement should have been removed. How to use ​ To use the clinical safeguards API, users must provision a healthcare agent service resource in your Azure subscription.​ When creating the healthcare agent service, make sure to set the plan to “Agent (C1)”.​ Once created, please fill out the form here. 
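Because the safeguard is in private preview, the exact endpoint and authentication scheme come from your onboarding. Purely as a sketch, here is how a client might submit the request shown above and walk the response; the URL and subscription-key header are placeholders and assumptions rather than documented values:

import requests

ENDPOINT = "https://<your-healthcare-agent-service>/clinical-safeguards/conflict-detection"  # placeholder
API_KEY = "<your-key>"  # placeholder; use the auth method provided during onboarding

payload = {
    "input_document": {
        "document_id": "1",
        "document_text": "Exam: CT of the abdomen and pelvis\n...",  # full report text as shown above
        "document_metadata": {
            "document_type": "CLINICAL_REPORT",
            "date_of_service": "2024-10-10",
            "locale": "en-us",
        },
    },
    "patient_metadata": {
        "date_of_birth": "1944-01-01",
        "date_of_admission": "2024-10-10",
        "biological_sex": "FEMALE",
        "patient_id": "3",
    },
    "request_id": "1",
}

resp = requests.post(ENDPOINT, json=payload, headers={"Ocp-Apim-Subscription-Key": API_KEY})
resp.raise_for_status()
for inference in resp.json().get("inferences", []):
    spans = [(o["begin"], o["end"]) for o in inference["output_token"]["offsets"]]
    print(inference["type"], inference["confidence_score"], spans)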
* This clinical safeguard does not define criteria for determining or identifying biological sex. Sex mismatch is based on the information in the metadata and the medical note. Please remember that neither Clinical Conflict Detection nor Health Agent Service is made available, designed, intended or licensed to be used (1) as a medical device, (2) in the diagnosis, cure, mitigation, monitoring, treatment or prevention of a disease, condition or illness or as a substitute for professional medical advice. The use of these products is subject to the Microsoft Product Terms and other licensing agreements and to the Medical Device Disclaimer and documentation available here. View the full article
  4. Introduction This is the second post for RAG Time, a 7-part educational series on retrieval-augmented generation (RAG). Read the first post of this series and access all videos and resources in our Github repo. Journey 2 covers indexing and retrieval techniques for RAG: Data ingestion approaches: use Azure AI Search to upload, extract, and process documents using Azure Blob Storage, Document Intelligence, and integrated vectorization. Keyword and vector search: compare traditional keyword matching with vector search Hybrid search: how to apply keyword and vector search techniques with Reciprocal Rank Fusion (RRF) for better quality results across more use cases. Semantic ranker and query rewriting: See how reordering results using semantic scoring and enhancing queries through rewriting can dramatically improve relevance. Data Pipeline What is data ingestion? When building a RAG framework, the first step is getting your data into the retrieval system and processed so that it’s primed for the LLM to understand. The following sections cover the fundamentals of data ingestion. A future RAG Time post will cover more advanced topics in data ingestion. Integrated Vectorization Azure AI Search offers integrated vectorization, a built-in feature. It automatically converts your ingested text (or even images) into vectors by leveraging advanced models like OpenAI’s text-embedding-3-large—or even custom models you might have. This real-time transformation means that every document and every segment of it is instantly prepared for semantic analysis, with the entire process seamlessly tied into your ingestion pipeline. No manual intervention is required, which means fewer bottlenecks and a more streamlined workflow. Parsing documents The first step of the data ingestion process involves uploading your documents from various sources—whether that’s Azure Blob Storage, Azure Data Lake Storage Gen2 or OneLake. Once the data is in the cloud, services such as Azure Document Intelligence and Azure Content Understanding step in to extract all the useful information: text, tables, structural details, and even images embedded in your PDFs, Office documents, JSON files, and more. In addition, Azure AI Search automatically supports change tracking so you can rest assured your documents remain up to date without any extra effort. Chunking Documents A critical component in integrated vectorization is chunking. Most language models have a limited context window, which means feeding in too much unstructured text can dilute the quality of your results. By splitting larger documents into smaller, manageable chunks based on sentence boundaries or token counts—while intelligently allowing overlaps to preserve context—you ensure that key details aren’t lost. Overlapping can be especially important for maintaining the continuity of thought, such as preserving table headers or the transition between paragraphs, which in turn boosts retrieval accuracy and improves overall performance. Using integrated vectorization, you lay a solid foundation for a highly effective RAG system that not only understands your data but leverages it to deliver precise, context-rich search results Retrieval Strategies Here are some common, foundational search strategies used in retrieval systems. Keyword Search Traditional keyword search is the foundation of many search systems. This method works by creating an inverted index—a mapping of each term in a document to the documents where it appears. 
For instance, imagine you have a collection of documents about fruits. A simple keyword search might count the occurrences of words like “apple,” “orange,” or “banana” to determine the relevance of each document. This approach is particularly effective when you need literal matches, such as pinpointing a flight number or a specific code where precision is crucial. Even as newer search technologies emerge, keyword search remains a robust baseline. It efficiently matches exact terms found in text, ensuring that when specific information is needed, the results are both fast and accurate. Vector Search While keyword search provides exact matches, it may not capture the full context or nuanced meanings behind a query. This is where vector search shines. In vector search, both queries and document chunks are transformed into high-dimensional embeddings using advanced models like OpenAI’s text-embedding-3-large. These embeddings capture the semantic essence of words and phrases in multi-dimensional vectors. Once everything is converted into vectors, the system performs a k-nearest neighbor search using cosine similarity. This method allows the search engine to find documents that are contextually similar—even if they don’t share exact keywords. For example, demo code in our system showed that a query like “what is Contoso?” not only returned literal matches but also contextually related documents, demonstrating a deep semantic understanding of the subject. In summary, combining keyword search with vector search in your RAG system leverages the precision of text-based matching with the nuanced insight of semantic search. This dual approach ensures that users receive both exact answers and optionally related information that enhances the overall retrieval experience. Hybrid Search Hybrid search is a powerful method that blends the precision of keyword search with the nuanced, context-aware capabilities of vector search. Hybrid search leverages the strengths of both strategies. On one hand, keyword search excels at delivering exact matches, which is critical when you're looking for precise information like flight numbers, product codes, or specific numerical data. On the other hand, vector search digs deeper by transforming your queries and documents into embeddings, allowing the system to understand and interpret the underlying semantics of the content. By combining these two, hybrid search ensures that both literal and contextually similar results are retrieved. Reciprocal Rank Fusion (RRF) is a technique used to merge the results from both keyword and vector searches into one cohesive set. Essentially, it reorders and integrates the result lists from each method, amplifying the highest quality matches from both sides. The outcome is a ranked list where the most relevant document chunks are prioritized. By incorporating hybrid search into your retrieval system, you get the best of both worlds: the precision of keyword matching alongside the semantic depth of vector search, all working together to deliver an optimal search experience. Reranking Reranking is a post-retrieval step. Reranking uses a reasoning model to sort and prioritize the most relevant retrieved documents first. Semantic ranker in Azure AI Search uses a cross-encoder model to re-score every document retrieved on a normalized scale from 0 to 4. This score reflects how well the document semantically matches the query. 
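As a concrete illustration, here is a minimal hybrid query with semantic ranking using the azure-search-documents Python SDK; the endpoint, index name, vector field, semantic configuration, and document fields are assumptions for illustration:

from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient
from azure.search.documents.models import VectorizableTextQuery

# Assumed resource details for illustration only
search_client = SearchClient(
    endpoint="https://<your-search-service>.search.windows.net",
    index_name="contoso-docs",
    credential=AzureKeyCredential("<your-query-key>"),
)

query = "what is Contoso?"
results = search_client.search(
    search_text=query,                      # keyword (BM25) side of the hybrid query
    vector_queries=[VectorizableTextQuery(  # vector side, embedded by the service (integrated vectorization)
        text=query, k_nearest_neighbors=50, fields="text_vector"
    )],
    query_type="semantic",                  # apply the semantic ranker on top of the fused results
    semantic_configuration_name="default",
    top=5,
)

for doc in results:
    print(doc["@search.score"], doc.get("@search.reranker_score"), doc.get("title"))

Each result carries both the fused @search.score and, when semantic ranking is enabled, a reranker score on the 0-to-4 scale.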
You can use this score to establish a minimum threshold to filter out low-quality or “noisy” documents, ensuring that only the best passages are sent along for further processing. This re-ranking model is trained on data commonly seen in RAG applications, across multiple industries, languages and data types. Query transformations Sometimes, a user’s original query might be imprecise or too narrow, which can lead to relevant content being missed. Pre-retrieval, you can transform, augment or modify the search query to improve recall. Query rewriting in Azure AI Search is a pre-retrieval feature that transforms the initial search query into alternative expressions. For example, a question like "What underwater activities can I do in the Bahamas?" might be rephrased as "water sports available in the Bahamas" or "snorkeling and diving in the Bahamas." This expansion creates additional candidate queries that help surface documents that may have been overlooked by the original wording. By optimizing across the entire query pipeline, not just the retrieval phase, you have more tools to deliver more relevant information to the language model. Azure AI Search makes it possible to fine-tune the retrieval process, filtering out noise and capturing a wider range of relevant content—even when the initial query isn’t perfect. Continue your RAG Journey: Wrapping Up & Looking Ahead Let’s take a moment to recap the journey you’ve embarked on today. We started with the fundamentals of data ingestion, where you learned how to use integrated vectorization to extract valuable information. Next, we moved into search strategies by comparing keyword search—which offers structured, literal matching ideal for precise codes or flight details—with the more dynamic vector search that captures the subtle nuances of language through semantic matching. Combining these methods with hybrid search, and using Reciprocal Rank Fusion to merge results, provided a balanced approach: the best of both worlds in one robust retrieval system. To further refine your results, we looked at the semantic ranker—a tool that re-scores and reorders documents based on their semantic fit with your query—and query rewriting, which transforms your original search ideas into alternative formulations to catch every potential match. These enhancements ensure that your overall pipeline isn’t just comprehensive; it’s designed to deliver only the most relevant, high-quality content. Now that you’ve seen how each component of this pipeline works together to create a state-of-the-art RAG system, it’s time to take the next step in your journey. Explore our repository for full code samples and detailed documentation. And don’t miss out on future RAG Time sessions, where we continue to share the latest best practices and innovations in retrieval augmented generation. Getting started with RAG on Azure AI Search has never been simpler, and your journey toward building even more effective retrieval systems is just beginning. Embrace the next chapter and continue to innovate! Next Steps Ready to explore further? Check out these resources, which can all be found in our centralized GitHub repo: Watch Journey 2 RAG Time GitHub Repo (Hands-on notebooks, documentation, and detailed guides to kick-start your RAG journey) Azure AI Search Documentation Azure AI Foundry Have questions, thoughts, or want to share how you’re using RAG in your projects? Drop us a comment below or open a discussion in our GitHub repo. Your feedback shapes our future content! 
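To make the Reciprocal Rank Fusion step described earlier concrete, here is a small, self-contained sketch of the standard RRF formula (each document scores the sum of 1/(k + rank) over the result lists, with k commonly set to 60); the document IDs are made up:

def reciprocal_rank_fusion(ranked_lists, k=60):
    """Merge several ranked lists of document IDs using Reciprocal Rank Fusion."""
    scores = {}
    for ranked in ranked_lists:
        for rank, doc_id in enumerate(ranked, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

keyword_results = ["doc_7", "doc_2", "doc_9"]  # from the inverted-index (BM25) search
vector_results = ["doc_2", "doc_5", "doc_7"]   # from the k-nearest-neighbor vector search

for doc_id, score in reciprocal_rank_fusion([keyword_results, vector_results]):
    print(doc_id, round(score, 4))  # doc_2 and doc_7, found by both methods, rise to the top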
View the full article
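One more note on the chunking step covered above: integrated vectorization handles splitting for you, but purely to illustrate the overlap idea, a simplified word-based chunker (real splitters work on tokens or sentence boundaries) might look like this:

def chunk_text(text, max_words=500, overlap_words=50):
    """Split text into overlapping chunks (word-based as a rough stand-in for tokens)."""
    words = text.split()
    step = max_words - overlap_words
    chunks = []
    for start in range(0, len(words), step):
        chunk = " ".join(words[start : start + max_words])
        if chunk:
            chunks.append(chunk)
        if start + max_words >= len(words):
            break
    return chunks

document = "Azure AI Search supports integrated vectorization. " * 200  # toy document
chunks = chunk_text(document)
print(len(chunks), "chunks; the overlap preserves context across chunk boundaries")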
  5. In today’s digital landscape, SaaS and OAuth applications have revolutionized the way we work, collaborate, and innovate. However, they also introduce significant risks related to security, privacy and compliance. As the SaaS landscape grows, IT leaders must balance enabling productivity with managing risk. A key to managing risk is automated tools that provide real-time context and remediation capabilities to help Security Operations Center (SOC) teams outpace sophisticated attackers and limit lateral movement and damage. The Rise of OAuth App Attacks Over the past two years, there has been a significant increase in OAuth app attacks. Employees often create app-to-app connections without considering security risks. With just one click granting permissions, new apps can read and write emails, set rules, and gain authorization to perform nearly any action. These overprivileged apps are more at risk for compromise, and Microsoft internal research shows that 1 in 3 OAuth apps are overprivileged. 1 A common attack involves using phishing to compromise a user account, then creating a malicious OAuth app with elevated privileges or hijacking an existing OAuth app and manipulating it for malicious use. Once threat actors gain persistence in the environment, they can also deploy virtual machines or run spam campaigns resulting in data breaches, financial and reputational losses. Automatic Attack Disruption Microsoft’s Automatic attack disruption capabilities disrupt sophisticated in-progress attacks and prevent them from spreading, now including OAuth app-based attacks. Attack disruption is an automated response capability that stops in-progress attacks by analyzing the attacker’s intent, identifying compromised assets, and containing them in real time. This built-in, self-defense capability uses the correlated signals in XDR, the latest threat intelligence, and AI and machine learning backed models to accurately predict the attack path used and block an attacker’s next move before it happens with above 99% confidence. This includes response actions such as containing devices, disabling user accounts, or disabling malicious OAuth apps. The benefits of attack disruption include: Speed of response: attack disruption can disrupt attacks like ransomware in an average time of 3 minutes Reduced Impact of Attacks: by minimizing the time attackers have to cause damage, attack disruption limits the lateral movement of threat actors within your network, reducing the overall impact of the threat. This means less downtime, fewer compromised systems, and lower recovery costs. Enhanced Security Operations: attack disruption allows security operations teams to focus on investigating and remediating other potential threats, improving their efficiency and overall effectiveness. Real-World Attacks Microsoft Threat Intelligence has noted a significant increase in OAuth app attacks over the past two years. In most cases a compromised user provides the attacker initial access, while the malicious activities and persistence are carried out using OAuth applications. Here’s a real-world example of an OAuth phishing campaign that we’ve seen across many customers’ environments. Previous methods to resolve this type of attack would have taken hours for SOC teams to manually hunt and resolve. Initial Access: A user received an email that looks legitimate but contains a phishing link that redirects to an adversary-in-the-middle (AiTM) phishing kit. Figure 1. 
An example of an AiTM controlled proxy that impersonates a login page to steal credentials. Credential Access: When the user clicks on that link, they are redirected to an AiTM controlled proxy that impersonates a login page to steal the user credentials and an access token which grants the attacker the ability to create or modify OAuth apps. Persistence and Defense Evasion: The attacker created multiple malicious OAuth apps across various tenants, which grant read and write access to the user’s e-mail, files, and other resources. Next, the attacker created an inbox forwarding rule to exfiltrate emails. An additional rule was created to empty the sent box, thus deleting any evidence that the user was compromised. Most organizations are completely blindsided when this happens. Automatic Attack Disruption: Defender XDR gains insights from many different sources including endpoints, identities, email, collaboration tools, and SaaS apps and correlates the signals into a single, high-confidence incident. In this attack, XDR identifies assets controlled by the attacker and automatically takes response actions across relevant Microsoft Defender products to disable affected assets and stop the attack in real time. SOC Remediation: After the risk is mitigated, Microsoft Defender admins can manually unlock the users that had been automatically locked by the attack disruption response. The ability to manually unlock users is available from the Microsoft Defender action center, and only for users that were locked by attack disruption. Figure 2. Timeline to disrupt an OAuth attack comparing manual intervention vs. automatic attack disruption. Enhanced Security with Microsoft Defender for Cloud Apps Microsoft Defender for Cloud Apps enables the necessary integration and monitoring capabilities required to detect and disrupt malicious OAuth applications. To ensure SOC teams have full control, they can configure automatic attack disruption and easily revert any action from the security portal. Figure 3. An example of a contained malicious OAuth application, with attack disruption tag Conclusion Microsoft Defender XDR's automatic disruption capability leverages AI and machine learning for real-time threat mitigation and enhanced security operations. Want to learn more about how Defender for Cloud Apps can help you manage OAuth attacks and SaaS-based threats? Dive into our resources for a deeper conversation. Get started now. Get started Make sure your organization fulfils the Microsoft Defender pre-requisites (Mandatory). Connect “Microsoft 365 connector” in Microsoft Defender for Cloud Apps (Mandatory). Check out our documentation to learn more about Microsoft 365 Defender attack disruption prerequisites, available controls, and indications. Learn more about other scenarios supported by automatic attack disruption Not a customer, yet? Start a free trial today. 1 Microsoft Internal Research, May 2024, N=502 View the full article
  6. Dear Microsoft 365 Developer Team, I would like to submit a feature request regarding custom menus in Word JavaScript Add-ins. Currently, when defining custom menus for the ribbon via the manifest.xml, it is possible to create a root-level menu control with a list of menu items. However, submenus (nested menus) are not supported. This limits the ability to create well-structured and user-friendly menus, especially when dealing with more complex add-ins that require logical grouping of actions. Use Case Example: 
Imagine an add-in that handles document templates, formatting options, and insertion of custom content. It would be much more intuitive to organize these into hierarchical menus like:

My Add-in Menu
|
|---Templates
|   |---Contract Template
|   |---NDA Template
|---Formatting
|   |---Apply Header
|   |---Apply Footer
|---Insert
    |---Clause
    |---Placeholder

Currently, to achieve something like this, we either have to create long flat menus, which are less user-friendly and harder to navigate, or define multiple root-level menu controls as a workaround. However, having too many root-level menus clutters the ribbon and makes the overall user experience confusing and less efficient. Feature Request:
Please consider adding support for nested menu structures (submenus) in Office Add-in command definitions. This would:
- Greatly improve user experience for complex add-ins.
- Allow better organization of actions and commands.
- Align the Add-in UX closer to the native ribbon and menu experiences in Office apps.
Possible Implementation Suggestions:
- Extend the Menu control to allow nested Menu or MenuItem elements.
- Allow referencing predefined menus to enable reuse and modularity.
Related Documentation:
- Office Add-ins XML manifest
- Add-In Commands Overview
- Control Element of Type Menu
Thank you for considering this enhancement. It would be a huge step forward for creating more powerful and user-friendly Office Add-ins! Best regards,
Ingo View the full article
  7. This blog series is designed to help you skill up on Microsoft 365 Copilot.. We hope you will make this your go-to source for the latest updates, resources, and opportunities in technical skill building for Microsoft 365 Copilot. New Microsoft 365 Copilot training for business users: On-demand training introducing business users to Copilot. Great for users new to Copilot! Work smarter with AI - Training | Microsoft Learn Get more done and unleash your creativity with Microsoft Copilot. In this learning path, you'll explore how to use Microsoft Copilot or Microsoft 365 Copilot to help you research, find information, and generate effective content. On-demand training for business users looking to improve productivity in the apps they use every day: Draft, analyze, and present with Microsoft 365 Copilot - Training | Microsoft Learn This Learning Path directs users to learn common prompt flows in Microsoft 365 apps including PowerPoint, Word, Excel, Teams, and Outlook. It also introduces Microsoft 365 Copilot Chat and discusses the difference between work and web grounded data. On-demand training for business users that want to get started with AI-powered agents: Transform your everyday business processes with no-code agents – Training | Microsoft Learn This Learning Path examines no-code agents in Microsoft 365 Copilot Chat and SharePoint and explores how business users can create, manage, and use agents as their own AI-powered assistant. New resources to accelerate your journey with Microsoft 365 Copilot Chat and agents to transform business processes: Copilot Chat and agent starter kit To support the announcement of Microsoft 365 Copilot Chat, we have updated the Copilot Success Kit and Copilot Success Kit for small and medium-sized businesses, which now includes a new agent starter kit with guidance and easy ways for your organization to get started with Copilot Chat and agentic functionality. You can find the latest assets and resources here to start your journey. The Copilot Chat and Agent Starter Kit has a comprehensive set of guidance for both IT and end-users. Agent overview guide Learn how to quickly unlock the value of Copilot through agents. See the easiest ways to get started across Copilot Chat, SharePoint, and Microsoft Copilot Studio with lots of examples and templates that will help you quickly build and use your first agents. IT Set up and Controls Guide Get the latest on IT set up and controls guidance for Copilot Chat and agents. Manage access to Copilot Chat for your users and set up required data governance controls. Then set up access to agents including licensing and billing plans. Learn how to monitor and manage consumption. Latest Agent Blogs Catch up on the latest announcement from Satya Nadella and Jared Spataro announcing Copilot Chat here Understand how pricing for Microsoft 365 Copilot Chat will work and what new capabilities we are announcing in Copilot Studio, to support it here Copilot Chat and agents user resources Share the Copilot Chat user training deck with users at your organization to introduce Copilot Chat and guide them on how to use it effectively For dedicated guidance on using and creating agents, share the Agents in Copilot Chat handout. Copilot Chat scenarios We have launched new Copilot Chat (free) and agent (consumption) scenarios to the Scenario Library, with easy steps for each of your functional teams to get started. 
Microsoft 365 Copilot AMA event for IT administrators (recap) On-demand AMA on tools and techniques for preparing your data for Copilot Prepare your data for Copilot: Essential tools and techniques Learn how to address oversharing, integrate SharePoint Advanced Management, and utilize Microsoft Purview for secure and compliant data handling. Get practical guidance to ensure your data is ready for Copilot deployment, including insights from our Microsoft 365 Copilot deployment blueprint On-demand AMA on how data flows through Microsoft 365 Copilot Follow the prompt: How data flows through Microsoft 365 Copilot Explore how Microsoft processes and protects your data with Microsoft 365 Copilot. Focus on enterprise data protection, responsible AI services, and orchestration in managing prompts. Learn about tools to prevent data loss and oversharing, and how Microsoft Graph Connectors and agents integrate external data sources to enhance Copilot skills and knowledge. Join us at the Microsoft 365 Community Conference in Las Vegas, May 6-8 The Microsoft 365 Community Conference is your chance to keep up with AI, build game-changing skills, and take your career (and business) even further. With over 200 sessions, workshops, keynotes, and AMAs, you’ll learn directly from the experts and product-makers who are reimagining what’s possible in the workplace. Here’s what you can expect: Meet one-on-one with the people who create Microsoft products—ask questions, share feedback, and discover real-world solutions Explore Microsoft’s latest product updates and learn about what’s on the horizon Build and sharpen skills you can use immediately to be more productive, creative, and collaborative with the Microsoft tools you use every day Grow your network, dive deep, and have fun with the best community in tech How to Register: Buy tickets today and get ready to transform the way you work. Save $150 with our exclusive customer code SAVE150. View the full article
  8. What is Network Security Perimeter? The Network Security Perimeter is a feature designed to enhance the security of Azure PaaS resources by creating a logical network isolation boundary. This allows Azure PaaS resources to communicate within an explicit trusted boundary, ensuring that external access is limited based on network controls defined across all Private Link Resources within the perimeter. Azure Monitor - Network Security Perimeter - Public Cloud Region - Update We are pleased to announce the expansion of Network Security Perimeter features in Azure Monitor services from 6 to 56 Azure regions. This significant milestone enables us to reach a broader audience and serve a larger customer base. It underscores our continuous growth and dedication to meeting the security needs of our global customers. The Network Security Perimeter feature, now available in these additional regions, is designed to enhance the security and monitoring capabilities of our customers' networks. By utilizing our solution, customers can achieve a more secure and isolated network environment, which is crucial in today's dynamic threat landscape. Currently, NSP is in Public Preview with Azure Global customers and we have expanded Azure Monitor region support for NSP from 6 regions to 56 regions. The region rollout has enabled our customers to meet their network isolation and monitoring requirements for implementing the Secure Future Initiative (SFI) security waves. Azure Monitor - Network Security Perimeter Configuration Key Benefits to Azure Customers The Network Security Perimeter (NSP) provides several key benefits for securing and managing Azure PaaS resources: Enhances security by allowing communication within a trusted boundary and limiting external access based on network controls. Provides centralized management, enabling administrators to define network boundaries and configure access controls through a uniform API in Azure Core Network. Offers granular access control with inbound and outbound rules based on IP addresses, subscriptions, or domain names. Includes logging and monitoring capabilities for visibility into traffic patterns, aiding in auditing, compliance, and threat identification. Integrates seamlessly with other Azure services and supports complex network setups by associating multiple Private Link Resources with a single perimeter. These characteristics highlight NSP as an excellent instrument for enhancing network security and ensuring data integrity based on the network isolation configuration. Have a Question / Any Feedback? Reach us at AzMon-NSP-Scrum@microsoft.com View the full article
  9. Hi, Insiders! Considering writing your first novel or a children’s book? Microsoft Copilot can help you get started or get unstuck during the story development phase of the project, or help you pick the perfect title for your masterpiece. If you need help with the characters Let’s say you have an idea for a story and main character, but are struggling to decide on its name, background, or personality. Share what you’re thinking of with Copilot and ask for some suggestions! Sample prompt I’m writing a children’s book about a pencil living among pens and learning how to fit in while also embracing its uniqueness. Can you come up with 2-3 relatable name ideas for the main character pencil? Also, generate 2-3 punny ideas for the name of the pen town. Copilot’s response If you need help with the plot Maybe your character is clear, but you’re not sure how to drive the plot forward. Or, maybe you’re staring at a blank page and need some help simply “putting pen to paper.” Copilot can take even the roughest idea and give you some helpful suggestions for turning it into something that will spur you on. Sample prompt I want to create a story for adults about marriage in your late 60s. I want it to feel realistic and give useful advice. The story is about fictional characters Gillian and Robert, who met on a dating app after their children told them to get back out there. Gillian’s husband passed away a few months prior, and Robert is divorced from his high school sweetheart. Can you suggest 1-3 plot points the book could cover that relate to their situation and what someone in their 60s might encounter on a dating app or in the dating scene? Copilot’s response If you need help with a copy issue So your characters and plot are clear – fantastic! Copilot can still be of assistance when you’re struggling to put into words a quote, scene, phrase, or paragraph. Give it your rough draft and see how it tweaks and refines it. Sample prompt I’m writing a scene for a short personal essay about when I visited the Grand Canyon for the first time. I wasn’t just struck by its beauty, but it made me almost terrified of how insignificant we can be in the grand scheme of life. I mentioned this to my father, whom I was traveling with, and he reminded me of how we all make small impacts on the world every second of every day. Can you write a short dialog to showcase this conversation? Copilot’s response Tips and tricks As you draft your own prompts throughout your book ideation and writing process, keep these tips in mind to make Copilot’s responses as effective as possible: Be specific: Instead of asking, “Give me some nonfiction book ideas,” you could ask, “What are 3-5 book ideas for a story for teenagers about entering high school?” Provide context: Copilot can tailor its responses to the type of writing or style you want to emulate: “Give me 2-3 plot points for a novel about skiing that’s both serious about the sport and lighthearted in tone.” Ask clear questions: Instead of a broad question like, “What should I write about?” try, “What are some long-form essays I could write as a 23-year-old single man living in Europe?” Break down complex requests: If you have a multi-part question, break it into smaller parts: “First, can you provide a title and outline for a cookbook about cooking with children? 
Then, suggest 3-5 recipes I should include.” Specify desired format: If you need a list, summary, or detailed explanation, mention that: “Can you provide a list of 5 books or articles I should read if I want to write my own book of poems?” Indicate your preferences: Let Copilot know if you have a preference for the type of information or tone. “Can you write a dialog between a worm and an apple that’s funny and uses Gen Z lingo?” Provide examples: If you’re looking for creative ideas, give an example of what you like. “I need a story idea inspired by ‘Harold and the Purple Crayon.’” Ask follow-up questions: If Copilot’s initial response isn’t quite what you need, ask a follow-up question to narrow down the information: “Can you give more details on the side character Bill who lives in a teapot?” Be patient and iterative: Sometimes it takes a few tries to get the perfect response. Feel free to refine your prompt based on the initial answers you receive. We can’t wait to read what you come up with! Learn about the Microsoft 365 Insider program and sign up for the Microsoft 365 Insider newsletter to get the latest information about Insider features in your inbox once a month! View the full article
  10. Azure Kubernetes Service (AKS) now offers free platform metrics for monitoring your control plane components. This enhancement provides essential insights into the availability and performance of managed control plane components, such as the API server and etcd. In this blog post, we'll explore these new metrics and demonstrate how to leverage them to ensure the health and performance of your AKS clusters. What's New? Previously, detailed control plane metrics were only available through the paid Azure Managed Prometheus feature. Now, these metrics are automatically collected for free for all AKS clusters and are available for creating metric alerts. This democratizes access to critical monitoring data and helps all AKS users maintain more reliable Kubernetes environments. Available Control Plane Metrics The following platform metrics are now available for your AKS clusters:
- apiserver_memory_usage_percentage (API Server (PREVIEW) Memory Usage Percentage): Maximum memory percentage (based off current limit) used by the API server pod across instances
- apiserver_cpu_usage_percentage (API Server (PREVIEW) CPU Usage Percentage): Maximum CPU percentage (based off current limit) used by the API server pod across instances
- etcd_memory_usage_percentage (ETCD (PREVIEW) Memory Usage Percentage): Maximum memory percentage (based off current limit) used by the ETCD pod across instances
- etcd_cpu_usage_percentage (ETCD (PREVIEW) CPU Usage Percentage): Maximum CPU percentage (based off current limit) used by the ETCD pod across instances
- etcd_database_usage_percentage (ETCD (PREVIEW) Database Usage Percentage): Maximum utilization of the ETCD database across instances
Accessing the New Platform Metrics The metrics are automatically collected and available in the Azure Monitor Metrics explorer. Here's how to access them: Navigate to your AKS cluster in the Azure portal Select "Metrics" from the monitoring section In the Metric namespace dropdown, choose Container Service as the Metric Namespace and select any of the metrics mentioned above, e.g., API Server Memory Usage Percentage. You can also choose your desired aggregation (between Avg or Max) and timeframe. You'll now see the control plane metrics available for selection: These metrics can also be retrieved through the platform metrics API or exported to other destinations. Understanding Key Control Plane Metrics API Server Memory Usage Percentage The API server is the front-end for the Kubernetes control plane, processing all requests to the cluster. Monitoring its memory usage is critical because: High memory usage can lead to API server instability and potential outages Memory pressure may cause request latency or timeouts Sustained high memory usage indicates potential scaling issues A healthy API server typically maintains memory usage below 80%. Values consistently above this threshold warrant investigation and potential remediation. To investigate the issue further, follow the guide here. etcd Database Usage Percentage etcd serves as the persistent storage for all Kubernetes cluster data. The etcd_database_usage_percentage metric is particularly important because: etcd performance dramatically degrades as database usage approaches capacity High database utilization can lead to increased latency for all cluster operations Database size impacts backup and restore operations Best practices suggest keeping etcd database usage below 2GB (absolute usage) to ensure optimal performance.
When usage exceeds this threshold, you can clean up unnecessary resources, reduce watch operations, and implement resource quotas and limits. The Diagnose and Solve experience in Azure Portal has detailed insights on the cause of the etcd database saturation. To investigate this issue further, follow the guide here. Setting Up Alerts for Control Plane Metrics To proactively monitor your control plane, you can set up metric alerts: Navigate to your AKS cluster in the Azure Portal Select "Alerts" from the monitoring section Click on "Create" and select "Alert Rule" Select your subscription, resource group, and resource type "Kubernetes service" in the Scope (selected by default) and click on See all signals in Conditions Configure signal logic: Select one of the control plane metrics (e.g., "API Server Memory Usage Percentage") Set the condition (e.g., "Greater than") Define the threshold (e.g., 80%) Specify the evaluation frequency and window Define actions to take when the alert triggers Name and save your alert rule Example Alert Configurations API Server Memory Alert: Signal: apiserver_memory_usage_percentage Operator: Greater than Threshold: 80% Window: 5 minutes Frequency: 1 minute Severity: 2 (Warning) ETCD Database Usage Alert: Signal: etcd_database_usage_percentage Operator: Greater than Threshold: 75% Window: 15 minutes Frequency: 5 minutes Severity: 2 (Warning) You can also create alerts through CLI, PowerShell, or ARM templates. Conclusion The introduction of free Azure platform metrics for AKS control plane components represents an enhancement to the monitoring capabilities available to all AKS users. By leveraging these metrics, particularly API server memory usage and etcd database usage percentages, you can ensure the reliability and performance of your Kubernetes environments without additional cost. Start using these metrics today to gain deeper insights into your AKS clusters and set up proactive alerting to prevent potential issues before they impact your applications. Learn More For more detailed information, refer to the following documentation: Monitor the control plane List of platform metrics in AKS Troubleshoot API server and etcd problems in AKS View the full article
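As a programmatic complement to the portal steps above, the same free platform metrics can be pulled with the azure-monitor-query SDK; the cluster resource ID below is a placeholder, and the metric names come from the table earlier in the post:

from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient, MetricAggregationType

# Placeholder resource ID for your AKS cluster
RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.ContainerService/managedClusters/<cluster-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())
response = client.query_resource(
    RESOURCE_ID,
    metric_names=["apiserver_memory_usage_percentage", "etcd_database_usage_percentage"],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.MAXIMUM],
)

for metric in response.metrics:
    for series in metric.timeseries:
        for point in series.data:
            if point.maximum is not None:
                print(metric.name, point.timestamp, point.maximum)

The same values could feed a script that raises its own warning when usage crosses the thresholds discussed above.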
  11. Greetings pilots, and welcome to another pioneering year of AI innovation with Security Copilot. Find out how your organization can reach new heights with Security Copilot through the many exciting announcements on the way at both Microsoft Secure and RSA 2025. This is why now is the time to familiarize yourself and get airborne with Security Copilot. Go to School Microsoft Security Copilot Flight School is a comprehensive series charted to take students through fundamental concepts of AI definitions and architectures, take flight with prompting and automation, and hit supersonic speeds with Logic Apps and custom plugins. By the end of the course, students should be equipped with the requisite knowledge for how to successfully operate Security Copilot to best meet their organizational needs. The series contains 11 episodes with each having a flight time of around 10 minutes. Security Copilot is something I really, really enjoy, whether I’m actively contributing to its improvement or advocating for the platform’s use across security and IT workflows. Ever since I was granted access two years ago – which feels like a millennium in the age of AI – it’s been a passion of mine, and it’s why just recently I officially joined the Security Copilot product team. This series in many ways reflects not only my passion but similar passion found in my marketing colleagues Kathleen Lavallee (Senior Product Marketing Manager, Security Copilot) Shirleyse Haley (Senior Security Skilling Manager), and Shateva Long (Product Manager, Security Copilot). I hope that you enjoy it just as much as we did making it. Go ahead, and put on your favorite noise-cancelling headphones, it’s time, pilots, to take flight. Log Flight Hours There are two options for watching Security Copilot Flight School: either on Microsoft Learn or via the Youtube Playlist found on the Microsoft Security Youtube Channel. The first two episodes focus on establishing core fundamentals of Security Copilot platform design and architecture – or perhaps attaining your instrument rating. The episodes thereafter are plotted differently, around a standard operating procedure. To follow the ideal flight path Security Copilot should be configured and ready to go – head over to MS Learn and the Adoption Hub to get airborne. It’s also recommended that pilots watch the series sequentially, and be prepared to follow along with resources found on Github, to maximize learning and best align with the material. This will mean that you’ll need to coordinate with a pilot with owner permissions for your instance to create and manipulate the necessary resources. Episode 1 - What is Microsoft Security Copilot? Security is complex and requires highly specialized skills to face the challenges of today. Because of this, many of the people working to protect an organization work in silos that can be isolated from other business functions. Further, enterprises are highly fragmented environments with esoteric systems, data, and processes. All of which takes a tremendous amount of time, energy, and effort just to do the day-to-day. Security Copilot is a cloud-based, AI-powered security platform that is designed to address the challenges presented by complex and fragmented enterprise environments by redefining what security is and how security gets done. What is AI, and why exactly should it be used in a cybersecurity context? 
Episode 2 - AI Orchestration with Microsoft Security Copilot
Why is The Paper Clip Pantry a 5-star restaurant renowned the world over for its Wisconsin Butter Burgers? Perhaps it's how a chef uses a staff with unique skills and orchestrates the sourcing of resources in real time, against specific contexts, to complete an order. After watching this episode you'll understand how AI orchestration works, why nobody eats a burger with only ketchup, and how the Paper Clip Pantry operates just like the Security Copilot orchestrator.
Episode 3 – Standalone and Embedded Experiences
Do you have a friend who eats pizza in an inconceivable way? Maybe they eat a slice crust-first, or dip it into a sauce you never thought compatible with pizza? They work with pizza differently, just like any one security workflow could be different from one task, team, or individual to the next. This philosophy is why Security Copilot has two experiences – solutions embedded within products, and a standalone portal – to augment workflows no matter their current state. This episode will begin covering those experiences.
Episode 4 – Other Embedded Experiences
Turns out you can also insist upon putting cheese inside of pizza crust, or bake it thick enough to require a fork and knife. I imagine it's probably something Windows 95 Man would do. In this episode, the Microsoft Entra, Purview, Intune, and Microsoft Threat Intelligence products showcase how Security Copilot advances their workflows within their portals. Beyond reinforcing the idea of many workflows and many operators, the takeaway from this episode is that Security Copilot works with security-adjacent workflows – IT, identity, and DLP.
Episode 5 – Manage Your Plugins
Like our chef in The Paper Clip Pantry, we should probably define what we want to cook, which chefs to use, and set permissions for those who can interact with any input or output from the kitchen. Find out what plugins add to Security Copilot and how you can set plugin controls for your team and organization.
Episode 6 – Prompting
Is this an improv lesson, or a baking show? Or maybe, if you watch this episode, you'll learn how Security Copilot handles natural language inputs to provide you meaningful answers known as responses.
Episode 7 – Prompt Engineering
With the fundamentals of prompting in your flight log, it's time to soar a bit higher with prompt engineering. In this episode you will learn how to structure prompts in a way that maximizes the benefits of Security Copilot and begin building workflows. Congrats, pilot, your burgers will no longer come with just ketchup.
Episode 8 – Using Promptbooks
What would it look like to find a series of prompts and run them in the same sequence with the same output every time? You guessed it: a promptbook, a repeatable workflow in the age of AI. See where to access promptbooks within the platform, and claw back some of your day to perfect your next butter burger.
Episode 9 – Custom Promptbooks
You've been tweaking your butter burger recipe for months now. You've finally landed at the perfect version by incorporating a secret nacho cheese recipe. The steps are defined, the recipe perfect. How do you repeat it? Just like your butter burger creation, you might discover or design workflows with Security Copilot. With custom promptbooks you can repeat and share them across your organization. In this episode you'll learn about the different ways Security Copilot helps you develop your own custom AI workflows.
Episode 10 – Logic Apps
System automation, robot chefs? Actions? What if customers could order butter burgers with the click of a button, and the kitchen staff would automatically make one? Or perhaps every Friday at 2pm a butter burger was just delivered to you? Chances are there are different conditions across your organization that, when present, require a workflow to begin. With Logic Apps, Security Copilot can be used to automatically aid workflows across any system a Logic App can connect to. More automation, less mouse clicking; that's a flight plan everyone can agree on.
Episode 11 – Extending to Your Ecosystem
A famed restaurant critic stopped into The Paper Clip Pantry for a butter burger, and it's now the burger everyone is talking about. Business is booming and it's time to expand the menu – maybe a butter burger pizza, perhaps a doughnut butter burger? But you'll need some new recipes and sources of knowledge to achieve this. Like a food menu, the possibilities for expanding Security Copilot's capabilities are endless. In this episode learn how this can be achieved with custom plugins and knowledgebases. Once you have that in your log, you will be a certified Ace, and ready to take flight with Security Copilot.
Take Flight
I really hope that you not only learn something new but have fun taking flight with the Security Copilot Flight School. As with any new and innovative technology, the learning never stops, and there will be opportunities to log more flight hours from our expert flight crews. Stay tuned at the Microsoft Security Copilot video hub, Microsoft Secure, and RSA 2025 for more content in the next few months. If you think it's time to get the rest of your team and/or organization airborne, check out the Security Copilot adoption hub to get started: aka.ms/SecurityCopilotAdoptionHub
Other Resources
Our teams have been hard at work building solutions to extend Security Copilot; you can find them on our community GitHub page found at: aka.ms/SecurityCopilotGitHubRepo
To stay close to the latest in product news and development, and to interact with our engineering teams, please join the Security Copilot CCP to get the latest information: aka.ms/JoinCCP
View the full article
  12. The Future of AI blog series is an evolving collection of posts from the AI Futures team in collaboration with subject matter experts across Microsoft. In this series, we explore tools and technologies that will drive the next generation of AI. Explore more at: https://aka.ms/the-future-of-ai Customizing AI agents with the Semantic Kernel agent framework AI agents are autonomous entities designed to solve complex tasks for humans. Compared to traditional software agents, AI-powered agents allow for more robust solutions with less coding. Individual AI agents have shown significant capabilities, achieving results previously not possible. The potential of these agents is enhanced when multiple specialized agents collaborate within a multi-agent system. Research has shown that such systems, comprising single-purpose agents, are more effective than single multi-purpose agents in many tasks [1]. This enables automation of more complex workflows with improved results and higher efficiency in the future. In this post, we are going to explore how you can build single agents and multi-agent systems with Semantic Kernel. Semantic Kernel is a lightweight and open-source SDK developed by Microsoft, designed to facilitate the creation of production-ready AI solutions. Despite its capabilities, Semantic Kernel remains accessible, allowing developers to start with minimal code. For scalable deployment, it offers advanced features such as telemetry, hooks, and filters to ensure the delivery of secure and responsible AI solutions. The Semantic Kernel Agent Framework offers pro-code orchestration within the Semantic Kernel ecosystem, facilitating the development of AI agents and agentic patterns capable of addressing more complex tasks autonomously. Starting with individual agents is recommended. Semantic Kernel provides a variety of AI service connectors, allowing developers and companies to select models from different providers or even local models. Additionally, Semantic Kernel gives developers the flexibility to integrate their agents created from managed services like Azure OpenAI Service Assistant API and Azure AI Agent Service into a unified system. Refer to the samples in the Semantic Kernel GitHub repository to get you started. Python: semantic-kernel/python/samples/getting_started_with_agents at main · microsoft/semantic-kernel .Net: semantic-kernel/dotnet/samples/GettingStartedWithAgents at main · microsoft/semantic-kernel Previous posts have thoroughly examined the principles of designing single agents and the effectiveness of multi-agent systems. The objective of this post is not to determine when a single agent should be employed versus a multi-agent system; however, it is important to emphasize that agents should be designed with a single purpose to maximize their performance. Assigning multiple responsibilities or capabilities to a single agent is likely to result in suboptimal outcomes. If your tasks can be efficiently accomplished by a single agent, that’s great! If you find that the performance of a single agent is unsatisfactory, you might consider employing multiple agents to collaboratively address your tasks. Our recent Microsoft Mechanics video outlines how a multi-agent system operates. Semantic Kernel offers a highly configurable chat-based agentic pattern, with additional patterns coming soon. It accommodates two or more agents and supports custom strategies to manage the flow of chat, enhancing the system’s dynamism and overall intelligence. 
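To make the single-agent and multi-agent ideas a bit more concrete, here is a minimal sketch using the Python version of the Agent Framework. It assumes the standard Azure OpenAI environment variables are set; the agent names, instructions, and exact constructor parameters are illustrative and may differ slightly between semantic-kernel releases, so treat this as a sketch rather than the definitive API.

# Minimal sketch: one single-purpose agent, then two agents collaborating in a group chat.
# Constructor parameters may vary by semantic-kernel version; Azure OpenAI settings come from env vars.
import asyncio

from semantic_kernel.agents import AgentGroupChat, ChatCompletionAgent
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion


async def main() -> None:
    # A single-purpose agent: reviews text and suggests improvements.
    reviewer = ChatCompletionAgent(
        service=AzureChatCompletion(),  # reads endpoint, key, and deployment from environment variables
        name="Reviewer",
        instructions="Review the given text and suggest concrete improvements.",
    )

    # A second single-purpose agent: applies the reviewer's feedback.
    writer = ChatCompletionAgent(
        service=AzureChatCompletion(),
        name="Writer",
        instructions="Rewrite the text, incorporating the reviewer's suggestions.",
    )

    # Chat-based multi-agent pattern: the agents take turns on the same task.
    chat = AgentGroupChat(agents=[reviewer, writer])
    await chat.add_chat_message("Draft a short description of a secure AI deployment checklist.")

    async for message in chat.invoke():
        print(f"{message.name}: {message.content}")


asyncio.run(main())

Keeping each agent single-purpose, as recommended above, is what makes the collaboration useful; a custom selection or termination strategy can then be layered onto the group chat to control the flow of the conversation.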
Semantic Kernel is production-ready with built-in features that are off by default but available when needed. One such feature is observability. Often in an agentic application, agent interactions are not shown in the output, which is understandable since users typically focus on results. Nonetheless, being able to inspect the inner process is crucial for developers. Tracking interactions becomes challenging as the number of agents increases and tasks grow complex. Semantic Kernel can optionally emit telemetry data to ease debugging. For a demonstration of three agents collaborating in real time, and a review of the agent interactions with the tracing UI in the Azure AI Foundry portal, please watch the following video demo:
The code for the demo can be found in a single demo app in the Semantic Kernel repository: semantic-kernel/python/samples/demos/document_generator at main · microsoft/semantic-kernel
In summary, Semantic Kernel offers an efficient framework for both single and multi-agent systems. As the platform evolves, it promises even more innovative patterns and capabilities, solidifying its role in agent-based AI. Whether for simple tasks or complex projects, Semantic Kernel provides the necessary tools to achieve your goals effectively. Happy coding!
To get started:
Explore Azure AI Foundry models, agentic frameworks, and toolchain features
Begin coding using the Semantic Kernel Python repository in GitHub
Download the Azure AI Foundry SDK
Review our Learn documentation
View the full article
  13. Demo: Mpesa for Business Setup QA RAG Application
In this tutorial we are going to build a Question-Answering RAG Chat Web App. We utilize Node.js along with HTML, CSS, and JS, and we incorporate LangChain.js + Azure OpenAI + a MongoDB vector store (MongoDB Search Index). Get a quick look below.
Note: Documents and illustrations shared here are for demo purposes only, and Microsoft or its products are not part of Mpesa. The content demonstrated here should be used for educational purposes only. Additionally, all views shared here are solely mine.
What you will need:
An active Azure subscription, get Azure for Student for free or get started with Azure for 12 months free.
VS Code
Basic knowledge in JavaScript (not a must)
Access to Azure OpenAI, click here if you don't have access.
Create a MongoDB account (You can also use Azure Cosmos DB vector store)
Setting Up the Project
In order to build this project, you will have to fork this repository and clone it. GitHub Repository link: https://github.com/tiprock-network/azure-qa-rag-mpesa . Follow the steps highlighted in the README.md to set up the project under Setting Up the Node.js Application.
Create Resources that you Need
In order to do this, you will need to have Azure CLI or Azure Developer CLI installed on your computer. Go ahead and follow the steps indicated in the README.md to create Azure resources under Azure Resources Set Up with Azure CLI. You might want to log in to Azure CLI with a device code instead. Instead of a plain az login, you can run:
az login --use-device-code
Or, if you prefer the Azure Developer CLI, execute this command instead:
azd auth login --use-device-code
Remember to update the .env file with the values you used to name the Azure OpenAI instance and the Azure models, as well as the API keys you obtained while creating your resources.
Setting Up MongoDB
After accessing your MongoDB account, get the URI link to your database and add it to the .env file, along with your database name and the vector store collection name you specified while creating your indexes for a vector search.
Running the Project
In order to run this Node.js project, start it with the following command:
npm run dev
The Vector Store
The vector store used in this project is MongoDB, where the document embeddings are stored. From the embeddings model instance we created on Azure AI Foundry we are able to create embeddings that can be stored in a vector store. The following code shows our embeddings model instance.

//create new embedding model instance
const azOpenEmbedding = new AzureOpenAIEmbeddings({
    azureADTokenProvider,
    azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
    azureOpenAIApiEmbeddingsDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_EMBEDDING_NAME,
    azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION,
    azureOpenAIBasePath: "https://eastus2.api.cognitive.microsoft.com/openai/deployments"
});

The code in uploadDoc.js offers a simple way to create the embeddings and store them in MongoDB. In this approach the text from the documents is loaded using the PDFLoader from the LangChain community package.
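As a rough sketch of what that loading and splitting step can look like (the file path and chunk sizes below are illustrative assumptions, not the repository's exact values - see uploadDoc.js for the real implementation):

//illustrative sketch only: load a PDF and split it into chunks ready for embedding
import { PDFLoader } from '@langchain/community/document_loaders/fs/pdf'
import { RecursiveCharacterTextSplitter } from '@langchain/textsplitters'

const returnSplittedContent = async () => {
    //load the source document (path is a placeholder)
    const loader = new PDFLoader('./docs/mpesa_for_business_setup.pdf')
    const docs = await loader.load()

    //split into overlapping chunks so each embedding stays within token limits
    const splitter = new RecursiveCharacterTextSplitter({
        chunkSize: 1000,
        chunkOverlap: 100
    })
    return await splitter.splitDocuments(docs)
}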
The following code demonstrates how the embeddings are stored in the vector store.

// Call the function and handle the result with await
const storeToCosmosVectorStore = async () => {
    try {
        const documents = await returnSplittedContent()

        //create store instance
        const store = await MongoDBAtlasVectorSearch.fromDocuments(
            documents,
            azOpenEmbedding,
            {
                collection: vectorCollection,
                indexName: "myrag_index",
                textKey: "text",
                embeddingKey: "embedding",
            }
        )

        if (!store) {
            console.log('Something wrong happened while creating store or getting store!')
            return false
        }

        console.log('Done creating/getting and uploading to store.')
        return true
    } catch (e) {
        console.log(`This error occurred: ${e}`)
        return false
    }
}

In this setup, Question Answering (QA) is achieved by integrating Azure OpenAI's GPT-4o with MongoDB Vector Search through LangChain.js. The system processes user queries via an LLM (Large Language Model), which retrieves relevant information from a vectorized database, ensuring contextual and accurate responses. Azure OpenAI Embeddings convert text into dense vector representations, enabling semantic search within MongoDB. The LangChain RunnableSequence structures the retrieval and response generation workflow, while the StringOutputParser ensures proper text formatting. The most relevant code snippets are the AzureChatOpenAI instantiation, the MongoDB connection setup, and the API endpoint that handles QA queries using vector search and embeddings. The snippets below explain the major parts of the code.
Azure AI Chat Completion Model
This is the model used in this implementation of RAG as the chat completion model. Below is a code snippet for it.

const llm = new AzureChatOpenAI({
    azTokenProvider,
    azureOpenAIApiInstanceName: process.env.AZURE_OPENAI_API_INSTANCE_NAME,
    azureOpenAIApiDeploymentName: process.env.AZURE_OPENAI_API_DEPLOYMENT_NAME,
    azureOpenAIApiVersion: process.env.AZURE_OPENAI_API_VERSION
})
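Before looking at the streaming endpoint, it helps to see how the vector store and chat model can be combined to answer a question over the ingested documents. The sketch below is illustrative only: it assumes the store and llm instances shown above, and the prompt wording and the number of retrieved chunks are assumptions rather than the repository's exact code.

//illustrative sketch: answer a question using chunks retrieved from the MongoDB vector store
//assumes `store` (MongoDBAtlasVectorSearch) and `llm` (AzureChatOpenAI) from the snippets above
import { ChatPromptTemplate } from '@langchain/core/prompts'
import { StringOutputParser } from '@langchain/core/output_parsers'
import { RunnableSequence } from '@langchain/core/runnables'

const answerQuestion = async (question) => {
    //retrieve the four most similar chunks for the question
    const retriever = store.asRetriever(4)
    const relevantDocs = await retriever.invoke(question)
    const context = relevantDocs.map((d) => d.pageContent).join('\n\n')

    //ground the model's answer in the retrieved context
    const prompt = ChatPromptTemplate.fromMessages([
        ['system', 'Answer the question using only the context below.\n\nContext:\n{context}'],
        ['human', '{question}']
    ])

    const chain = RunnableSequence.from([prompt, llm, new StringOutputParser()])
    return await chain.invoke({ context, question })
}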
Using a Runnable Sequence to give out Chat Output
This shows how a runnable sequence can be used to return a response in the particular output format defined by the output parser added to the chain.

//Stream response
app.post(`${process.env.BASE_URL}/az-openai/runnable-sequence/stream/chat`, async (req, res) => {
    //check for human message
    const { chatMsg } = req.body
    if (!chatMsg) return res.status(201).json({
        message: 'Hey, you didn\'t send anything.'
    })

    //put the code in an error-handler
    try {
        //create a prompt template
        const prompt = ChatPromptTemplate.fromMessages([
            ["system", `You are a French-to-English translator that detects if a message isn't in French. If it's not, you respond, "This is not French." Otherwise, you translate it to English.`],
            ["human", `${chatMsg}`]
        ])

        //runnable chain
        const chain = RunnableSequence.from([prompt, llm, outPutParser])

        //chain result
        let result_stream = await chain.stream()

        //set response headers
        res.setHeader('Content-Type', 'application/json')
        res.setHeader('Transfer-Encoding', 'chunked')

        //create readable stream
        const readable = Readable.from(result_stream)

        res.status(201).write(`{"message": "Successful translation.", "response": "`);

        readable.on('data', (chunk) => {
            // Convert chunk to string and write it
            res.write(`${chunk}`);
        });

        readable.on('end', () => {
            // Close the JSON response properly
            res.write('" }');
            res.end();
        });

        readable.on('error', (err) => {
            console.error("Stream error:", err);
            res.status(500).json({ message: "Translation failed.", error: err.message });
        });
    } catch (e) {
        //deliver a 500 error response
        return res.status(500).json({
            message: 'Failed to send request.',
            error: e
        })
    }
})

To run the front end of the code, go to your BASE_URL with the port given. This enables you to run the chatbot above and achieve similar results. The chatbot is basically HTML + CSS + JS, where JavaScript is mainly used with the fetch API to get a response. Thanks for reading. I hope you play around with the code and learn some new things.
Additional Reads
Introduction to LangChain.js
Create an FAQ Bot on Azure
Build a basic chat app in Python using Azure AI Foundry SDK
View the full article
  14. Hi All, I hope you are well. Anyway, on Android Enterprise Fully Managed devices, I have an ask to enforce a "No PIN, No Device Access" policy. These devices have the usual setup, where the PIN requirements are set with a device config policy and then checked with a corresponding compliance policy. But nowhere can I see a "restrict use of the device until a PIN is set" setting. Perhaps it's really obvious, but is this possible? The only obvious option I can see is in the compliance policy settings under Actions for noncompliance, as below: Would this be the appropriate setting or are there others? And if the device is locked, is the user able to set a PIN? Info appreciated. SK View the full article
  15. Video content has become an essential medium for communication, learning, and marketing. Microsoft 365 Copilot, combined with the Visual Creator Agent, is redefining the way professionals create videos. By leveraging AI-driven automation, users can generate high-quality videos with minimal effort. In this blog, we’ll explore how the Visual Creator Agent works within Microsoft 365 Copilot, its key features, and how you can use it to streamline video production. Full details in this blog https://dellenny.com/generating-videos-in-microsoft-365-copilot-using-visual-creator-agent/ View the full article
  16. Kirk Koenigsbauer is the COO of the Experiences + Devices division at Microsoft.
AI adoption is already happening in the workplace, but employees aren't waiting for an official rollout. Our most recent Work Trend Index shows that 75% of employees are using AI at work, and 78% of them are bringing their own AI tools – largely consumer-grade tools. This surge in AI adoption reflects clear demand for productivity gains, but unmanaged and often unsecured tools create real security and compliance risks. No organization wants confidential information inadvertently exposed or used to train external AI models. Leaders recognize the need for a secure, enterprise-wide AI solution that meets employee demand while ensuring data protection. However, some customers we meet with want to study ROI benefits before committing to a full AI subscription for every employee. That's why we introduced Microsoft 365 Copilot Chat in January 2025. Copilot Chat provides free, secure AI chat powered by GPT-4o, giving organizations an immediate and compliant alternative to consumer AI tools. Employees get powerful AI access while IT retains control—without requiring additional subscription commitments. With enterprise-grade security, built-in compliance, and flexible pay-as-you-go AI agents, Copilot Chat allows organizations to experiment, scale, and validate AI's impact. By offering employees a secure, discoverable, and powerful AI experience, organizations can embrace AI on their own terms—ensuring productivity gains without sacrificing security or overcommitting budgets.
Copilot Chat is a free* AI service for your whole organization
Copilot Chat helps employees in every role work smarter and accomplish more. When they need to do Internet-based research to get their job done, they should use Copilot Chat to get up-to-date summarized insights with speed and accuracy, without leaking sensitive information outside the company data boundary. And that's not all – employees can easily summarize, rewrite, or get insights from files or documents by simply uploading them in chat and prompting Copilot. Enterprise data protection applies to prompts, responses, and uploaded files, and they are stored securely to protect your organization's data. Copilot Chat also offers Pages, a persistent, digital canvas within the chat experience that lets employees collaborate with Copilot to create durable business artifacts.
Copilot is the UI for AI
Copilot is your UI for AI—a single, user-friendly entry point where employees can readily access AI-powered agents without needing specialized technical knowledge. These agents help employees save time, boost productivity, and streamline daily tasks. Now, with Copilot Chat, these agents are available to all employees, even without a Microsoft 365 Copilot license—ensuring that AI-powered assistance is accessible across the organization. Employees can use agents to automate repetitive tasks, retrieve information from SharePoint and connected data sources, and support specialized workflows like customer service or field troubleshooting. They can also build their own agents using Agent Builder in Copilot Chat, while IT admins can create and manage organization-wide agents through Copilot Studio. With flexible pay-as-you-go access, organizations can integrate AI-powered automation at their own pace, deploying agents where they drive the most impact.
Agents are priced based on metered consumption and can be managed through the Power Platform Admin Center or, coming soon, the Microsoft 365 Admin Center—see our documentation for more information. As businesses refine their AI strategy, they can easily scale usage and expand to full Microsoft 365 Copilot capabilities to maximize value. Enterprise-grade security, compliance, and privacy Copilot Chat offers enterprise data protection. That means it protects your data with encryption at rest and in transit, offers rigorous security controls, and maintains data isolation between tenants. Copilot Chat prompts and responses are protected by the same contractual terms and commitments widely trusted by our customers for their emails in Exchange and files in SharePoint, including support for GDPR, EUDB support, and our Data Protection Addendum. Prompts and responses are logged, can be managed with retention policies, and can be included in eDiscovery and other Purview capabilities. We also help safeguard against AI-focused risks such as harmful content and prompt injections. For content copyright concerns, we provide protected material detection and our Customer Copyright Commitment. Additionally, Copilot Chat offers granular controls and visibility over web grounded search, which enhances responses from the latest data from the web. Furthermore, you can have confidence that Copilot Chat is fully supported - just as you would expect from Microsoft’s enterprise services. Bringing Copilot Chat to your Organization Customers can start with either the free or paid experience in the Microsoft 365 Copilot app, available at M365Copilot.com or in the Windows, Android, or iPhone app stores. To help your organization get the most out of the new Microsoft 365 Copilot Chat, we’ve updated the Copilot Success Kit and added the new Copilot Chat and Agent Starter Kit. This includes: Copilot Chat and agent overview guide to help your users learn how to use agents to make Copilot Chat even more personalized and intelligent for their daily work. Copilot Chat and agent IT setup and controls guide to plan, deploy, manage, and measure Microsoft 365 Copilot Chat in your organization. User engagement templates in a variety of formats—including email, Viva Engage, and Teams—that you can leverage to communicate updates and new features to your users. *Available at no additional cost for all Entra account users with a Microsoft 365 subscription View the full article
  17. I have been looking into mapping best practices for configuring a hardening / tiering model from on-premises Active Directory to Microsoft Entra Domain Services (MEDS). I'm well aware that MEDS is NOT a replacement for AD DS and has many restrictions and missing features, but that does not stop me from wanting to make it as secure as possible for the member servers joined to it. Since MEDS is a PaaS offering in Azure, deployed from within Azure and managed differently than Active Directory, there are of course different ways of implementing a good tiering model. In my study I wanted to see if I could enable the Protected Users feature (adding users to the Protected Users group). However, I find this group to be present but it is not possible to add members to it (the option is greyed out). I have a member server in the MEDS instance and have installed the AD DS Tools. My user is a member of the AD DDS Administrators group. I would like to know if anyone has some knowledge on the subject to share? View the full article
  18. This article describes how to create a report about group-based licensing assignments and any errors that might have occurred. The code uses the Microsoft Graph PowerShell SDK to fetch information about the groups used for licensing assignments, interpret the assignments, find users with assignment errors, and send email to inform administrators about what's been found. https://practical365.com/group-based-licensing-report-email/ View the full article
  19. Hi folks - Mike Hildebrand here. Welcome to spring in the US - and another daylight-savings clock-change cycle for many of us (I find it odd that we just 'change time'). Lately, I've been having conversations with customers about 'custom image' support in Windows 365. Like most aspects of IT, an image management system for standardized PC deployments can range from the simple ('Next > Next > Finish') up to the very complex (tiers, workflows and automations). Here's my walk-through of the 'dip a toe in the pool' method for trying out the custom image capabilities of Windows 365. I shared a version of this guidance with customers and colleagues, and it was suggested that I share it with the masses ... so here you go. Step 1 - Create a VM in Azure I keep it plain and simple; a ‘disposable’ VM Start with a ‘Marketplace’ W365 Cloud PC image with the M365 Apps These have optimizations to ensure the best remoting experiences NOTE: I leave off monitoring agents, boot diagnostics, redundancy settings, etc. TIP: Consider creating a ‘dedicated’ new Resource Group for a given image process This makes cleaning up and reducing costs afterwards simple (which I'll cover at the end) IMPORTANT NOTE: When making/using an initial VM as the source for the custom image, ensure “Standard” is chosen for ‘Security type’ “Trusted launch virtual machine” is the default - and won’t work for this process - AND it CANNOT be reverted on a deployed VM Step 2 - Customize it; prep it Once the VM is created, login to it, customize it and then Sysprep it Apps, patches, customizations, local policy, etc. OOBE + ‘Generalize’ + ‘Shutdown’ NOTE: Sysprep may error out - issues such as Bitlocker being enabled, or an issue w/ one or more Store apps can cause this. If it happens, check the log, as indicated on your VM For the apps issue, a PS command similar to this resolves it for me, but check the specific log on your VM for the details: Get-AppxPackage *Microsoft.Ink.Handwriting.Main.* | Remove-AppxPackage Step 3 - Capture it Make sure the VM is stopped (it should be), then ‘Capture’ the image from the portal: The 'Subscription' you select (below) needs to be the same one as where your Windows 365 service lives Select ‘No, capture only a managed image.’ NOTE: In my lab, the image creation process takes around 15 minutes for a simple VM Step 4 - Import it Once the image is created, open the Intune portal and add it to Windows 365 via the 'Custom images' tab TIP: In my lab, the image import takes around 45 minutes NOTE: Up to 20 images can be stored here NOTE: the ‘Subscription’ you select (below) must match where you captured the image (above) or it won’t show up in the ‘Source image’ drop-down Step 5 - Use it After the image is imported into the Windows 365 service, it can be chosen from the Custom Image option in the Provisioning Policy wizard NOTE: If you attempt to delete a 'Custom image' that is configured in a Provisioning Policy, the deletion will fail NOTE: You can edit a Provisioning Policy and update/change the image, but that change will only affect new Cloud PCs provisioned from the Policy - it won't affect existing Cloud PCs spawned from that Policy. Cleanup The VM, disk, IP, etc. and the 'Managed image' you created/captured above will incur costs in Azure - but not the 'Custom image' you uploaded to Intune/W365 (image storage there is included as part of the service). 
After you import the 'Custom image' to W365, you can/should consider deleting the Resource Group you created in Step 1 (which contains everything associated with your disposable VM – the VM itself, the disk, NIC, the image you captured, etc.). !!! HUGE warning - triple-dog-verify the Resource Group before you delete it !!! Cheers folks! Hilde View the full article
  20. Hi all. What sounds like it should be simple is turning out not to be. In a domain environment, we naturally have lots of file shares. We want these shares to live on SharePoint now, not on local servers. I can copy the data using the SharePoint Migration Tool, that bit is fine, and we can also create SharePoint sites for each share and set permissions on those sites, no problem. How do we get it so that when a user logs into a domain PC, they automatically get those SharePoint document libraries mapped in This PC? View the full article
  21. Hi, I have an old PC currently running Windows 11 22H2 and want to update it to 24H2. The issue is that this PC is not fully supported by Windows 11, as the CPU is unsupported and there is no TPM 2.0 chip. No update has been offered to my computer, so I have to install Windows 11 24H2 manually on this unsupported PC. It would also be great to keep the apps and personal files, so I would prefer an in-place upgrade to 24H2 rather than a clean install from a USB drive. Is there any in-place upgrade solution to update Windows 11 22H2 to 24H2? Much appreciated if you could let me know how to do that. Thank you View the full article
  22. Whether you consider yourself a FinOps practitioner, someone who's enthusiastic about driving cloud efficiency and maximizing the value you get from the cloud or were just asked to look at ways to reduce cost, the FinOps toolkit has something for you. This month, you'll find a complete refresh of Power BI with a new design, greatly improved performance, and the ability to calculate reservation savings for both EA and MCA; FinOps hubs have a new Data Explorer dashboard and simpler public networking architecture; and many more small updates and improvements across the board. Read on for details! In this update: New to the FinOps toolkit Website refresh with documentation on Microsoft Learn Power BI report design refresh Calculating savings for both EA and MCA accounts Performance improvements for Power BI reports Important note for organizations that spend over $100K New Data Explorer dashboard for FinOps hubs About the FinOps hubs data model Simplified network architecture for public routing Managing exports and hubs with PowerShell Other new and noteworthy updates Thanking our community What's next New to the FinOps toolkit? In case you haven't heard, the FinOps toolkit is an open-source collection of tools and resources that help you learn, adopt, and implement FinOps in the Microsoft Cloud. The foundation of the toolkit is the Implementing FinOps guide that helps you get started with FinOps whether you're using native tools in the Azure portal, looking for ways to automate and extend those tools, or if you're looking to build your own FinOps tools and reports. To learn more about the toolkit, how to provide feedback, or how to contribute, see FinOps toolkit documentation. Website refresh with documentation on Microsoft Learn Before we get into each of the tool updates, I want to take a quick moment to call out an update to the FinOps toolkit website, which many of you are familiar with. Over the last few months, you may have noticed that we started moving documentation to Microsoft Learn. With that content migration final, we simplified the FinOps toolkit website to provide high-level details about each of the tools with links out to the documentation as needed. Nothing major here, but it is a small update that we hope will help you find the most relevant content faster. If you find there's anything we can do to streamline discovery of information or improve the site in general, please don't hesitate to let us know! And, as an open-source project, we're looking for people who have React development experience to help us expand this to include deployment and management experiences as well. If interested in this or any contribution, please email us at ftk-support@microsoft.com to get involved. Power BI report design refresh In the 0.8 release, Power BI reports saw some of the most significant updates we've had in a while. The most obvious one is the visual design refresh, which anyone who used the previous release will be able to spot immediately after opening the latest reports. The new reports align with the same design language we use in the Azure portal to bring a consistent, familiar experience. This starts on the redesigned Get started page for each report. The Get started page helps set context on what the report does and how to set it up. Select the Connect your data button for details about how to configure the report, in case you either haven't already set it up or need to make a change. 
If you run into any issues, select the Get help button at the bottom-right of the page for some quick troubleshooting steps. This provides some of the same steps as you'll find in the new FinOps toolkit help + support page. Moving past the Get started page, you'll also see that each report page was updated to move the filters to the left, making a little more room for the main visuals. As part of this update, we also updated all visuals across both the storage and KQL reports to ensure they both have the latest and greatest changes. I suppose the last thing I should call out is that every page now includes a “Give feedback” link. I'd like to encourage you to submit feedback via these links to let us know what works well and what doesn't. The feedback we collect here is an important part of how we plan and prioritize work. Alternatively, you're also welcome to create and vote on issues in our GitHub repository. Each release we'll strive to address at least one of the top 10 feedback requests, so this is a great way to let us know what's most important to you! Calculating savings for both EA and MCA accounts If you've ever tried to quantify cost savings or calculate Effective Savings Rate (ESR), you probably know list and contracted cost are not always available in Cost Management. Now, in FinOps toolkit 0.8, you can add these missing prices in Power BI to facilitate a more accurate and complete savings estimate. Before I get into the specifics, I should note that there are 3 primary ways to connect your data to FinOps toolkit Power BI reports. You can connect reports: Directly to FOCUS data exported to a storage account you created. To a FinOps hub storage account ingestion container. To a FinOps hub Data Explorer cluster. Each option provides additive benefits where FinOps hubs with Data Explorer offers the best performance, scalability, and functionality, like populating missing prices to facilitate cost savings calculations. This was available in FinOps hubs 0.7, so anyone who deployed FinOps hubs with Data Explorer need only export price sheets to take advantage of the feature. Unfortunately, storage reports didn't include the same option. That is, until the latest 0.8 release, which introduced a new Experimental: Add Missing Prices parameter. When enabled, the report combines costs and prices together to populate the missing prices and calculate more accurate savings. Please be aware that the reason this is labeled as “experimental” is because both the cost and price datasets can be large and combining them can add significant time to your data refresh times. If you're already struggling with slow refresh times, you may want to consider using FinOps hubs with Data Explorer. In general, we recommend FinOps hubs with Data Explorer for any account that monitors over $100K in total spend. (Your time is typically more valuable than the extra $125 per month.) To enable the feature, start by creating a Cost Management export for the price sheet. Then update parameters for your report to set the Experimental: Add Missing Prices parameter to true. Once enabled, you'll start to see additional savings from reservations. While this data is available in all reports, you can generally see savings on three pages within the Rate optimization report. The Summary page shows a high-level breakdown of your cost with the details that help you quantify negotiated discount and commitment discount savings. 
In this release, you'll also find Effective Savings Rate (ESR), which shows your total savings compared to the list cost (what you would have paid with no discounts). The Total savings page is new in this release and shows that same cost and savings breakdown over time. And lastly, the Commitment discount savings page gives you the clearest picture of the fix for MCA accounts by showing the contracted cost and savings for each reservation instance. If savings are important for your organization, try the new Add Missing Prices option and let us know how it works for you. And again, if you experience significant delays in data refresh times, consider deploying FinOps hubs with Data Explorer. This is our at-scale solution for everyone.
Performance improvements for Power BI reports
Between gradually increased load times for storage reports and learnings from the initial release of KQL reports in 0.7, we knew it was time to optimize both sets of reports. And we think you'll be pretty excited about the updates. For those using storage reports, we introduced a new Deprecated: Perform Extra Query Optimization parameter that disables some legacy capabilities that you may not even be using:
Support for FOCUS 1.0-preview.
Tracking data quality issues with the x_SourceChanges column.
Fixing x_SkuTerm values to be numbers for MCA.
Informative x_FreeReason column to explain why a row might have no cost.
Unique name columns to help distinguish between multiple objects with the same display name.
Most organizations aren't using these and can safely disable this option. For now, we're leaving this option enabled by default to give people time to remove dependencies. We do plan to disable this option by default in the future and remove the option altogether to simplify the report and improve performance. Cosmetic and informational transforms will be disabled by default in 0.9 and removed on or after July 1, 2025 to improve Power BI performance. If you rely on any of these changes, please let us know by creating an issue in GitHub to request that we extend this date or keep the changes indefinitely.
For those using KQL reports that use FinOps hubs with Data Explorer, you'll notice a much more significant change. Instead of summarized queries with a subset of data, KQL reports now query the full dataset using a single query. This is made possible through a Power BI feature called DirectQuery. DirectQuery generates queries at runtime to streamline the ingestion process. What may take hours to pull data in a storage report takes seconds in KQL reports. The difference is astounding. Let me state this more explicitly: If you're struggling with long refresh times or need to set up incremental refresh on your storage reports, you should strongly consider switching to FinOps hubs with Data Explorer. You'll get full fidelity against the entire dataset with less configuration.
Important note for organizations that spend over $100K
I've already stated this a few times, but for those skimming the announcement, I want to share that we've learned a lot over the past few months as organizations big and small moved from storage to KQL reports in Power BI. With a base cost of $130 per month, we are now recommending that any organization that needs to monitor more than $100,000 in spend should deploy FinOps hubs with Data Explorer. While we won't remove storage as an option for those interested in a low-cost, low-setup solution, we do recognize that Data Explorer offers the best overall value for the cost.
And as we look at our roadmap, it's also important to note that Data Explorer will be critical as we expand to cover every FinOps capability. From allocation through unit economics, most capabilities require an analytical engine to break down, analyze, and even re-aggregate costs. At less than 0.2% of your total spend, we think you'll agree that the return is worth it. Most organizations see this as soon as they open a KQL report and it pulls data in seconds when they've been waiting for hours. Give it a shot and let us know what you think. We're always looking for ways to improve your experience. We think this is one of the biggest ways to improve, and the great thing is it's already available!
New Data Explorer dashboard for FinOps hubs
With the addition of Data Explorer in FinOps hubs 0.7, we now have access to a new reporting tool built into Azure Data Explorer and available for free to all users! Data Explorer dashboards offer a lighter-weight reporting experience that sits directly on the data layer, removing some of the complexities of Power BI reporting. Of course, Data Explorer dashboards aren't a complete replacement for Power BI. If you need to combine data from multiple sources, Power BI will still be the best option with its vast collection of connectors. This is just another option you have in your toolbelt. In fact, whether you use Power BI reports or not, we definitely recommend deploying the Data Explorer dashboard. Deploying the dashboard is easy. You import the dashboard from a file, connect it to your database, and you're ready to go! And once you set up the dashboard, you'll find pages organized in alignment with the FinOps Framework, similar to the Power BI reports. You'll find a few extra capabilities broken out in the dashboard compared to Power BI, but the functionality is generally consistent between the two, with some slight implementation differences that leverage the benefits of each platform. If you're familiar with the Power BI reports, you may notice that even this one screenshot is not directly comparable. I encourage you to explore what's available and make your own determination about which tool works best for you and your stakeholders.
Before I move on to the next topic, let me call out my favorite page in the dashboard: the Data ingestion page. Similar to Power BI, the Data ingestion page includes details about the cost of FinOps hubs, but much more interesting than that is the ingested data, which is broken down per dataset and per month. This gives you an at-a-glance view of what data you have and what you don't! This level of visibility is immensely helpful when troubleshooting data availability or even deciding when it's time to expand to cover more historical data! Whether you choose to keep or replace your existing Power BI reports, we hope you'll try the Data Explorer dashboard and let us know what you think. They're free and easy to set up. To get started, see Configure the Data Explorer dashboard.
About the FinOps hubs data model
While on the subject of Data Explorer, I'd also like to call out some new, updated, and even deprecated KQL functions available in FinOps hubs, as well as how to learn more about these and other functions and tables. I'll start by calling out that FinOps hubs with Data Explorer established a model for data ingestion that prioritizes backward compatibility. This may not be evident now, with support for only FOCUS 1.0, but you will see it as we expand to support newer FOCUS releases.
This is a lot to explain, so I won't get into it here, but instead I'll point you to where you can learn more at the end of this section. For now, let me say that you'll find two sets of functions in the Hub database: versioned and unversioned. For instance, Costs() returns all costs with the latest supported FOCUS schema (version 1.0 today), while Costs_v1_0() will always return FOCUS 1.0 data. This means that, if we were to implement FOCUS 1.1, Costs() would return FOCUS 1.1 data and Costs_v1_0() would continue to return FOCUS 1.0, whether the data was ingested with 1.0, 1.1, or even 1.0-preview, which we continue to support. I can cover this more in depth in a separate blog post. There's a lot to versioning, and I'm very proud of what we're doing here to help you balance up-to-date tooling without impacting existing reports. (This is another benefit of KQL reports over storage reports.) The key takeaway here is to always use versioned functions for tooling and reports that shouldn't change over time, and use unversioned functions for ad-hoc queries where you always want the latest schema.
Beyond these basic data access functions, we also offer 15 helper functions for common reporting needs. I won't go over them all here, but will call out a few additions, updates, and replacements. Most importantly, we identified some performance and memory issues with the parse_resourceid() function when run at scale for large accounts. We resolved the issue by extracting a separate resource_type() function for looking up resource type display names. This is mostly used within internal data ingestion, but it's also available for your own queries. The main callout is that, if you experienced any memory issues during data ingestion in 0.7, please look at 0.8. We're seeing some amazing performance and scale numbers with the latest update.
As you can imagine, FinOps reports use a lot of dates. And with that, date formatting is mandatory. In 0.8, we renamed the daterange() function to datestring() to better represent its capabilities and also extracted a new monthstring() function for cases when you only need the month name.
datestring(datetime, [datetime]) returns a formatted date or date range abbreviated based on the current date (e.g., "Jan 1", "Jan-Feb 2025", "Dec 15, 2024-Jan 14, 2025").
monthstring(datetime, [length]) returns the name of the month at a given string length (e.g., default = "January", 3 = "Jan", 1 = "J").
We also updated the numberstring() function to support decimal numbers. (You can imagine how that might be important for cost reporting!) numberstring(num, [abbrev]) returns a formatted string representation of the number based on a few simple rules that only show a maximum of three digits and a magnitude abbreviation (e.g., 1234 = "1.23K", 12345678 = "12.3M").
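To make that concrete, here's a small illustrative query that combines the versioned data function with a couple of the helper functions. The column names (ChargePeriodStart, EffectiveCost, ServiceName) come from the FOCUS schema; adjust them to whatever your reports actually use.

// Illustrative only: summarize this month's cost by service using the versioned helper functions.
// Costs_v1_0() pins the FOCUS 1.0 schema so the query keeps working as newer schema versions ship.
Costs_v1_0()
| where ChargePeriodStart >= startofmonth(now())
| summarize EffectiveCost = sum(EffectiveCost) by ServiceName
| order by EffectiveCost desc
| extend PrettyCost = numberstring(EffectiveCost)        // e.g., "12.3K"
| extend Month = monthstring(startofmonth(now()), 3)     // e.g., "Mar"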
And of course, these are just a few of the functions we have available. To learn more about the data model available in Power BI or Data Explorer, see FinOps hubs data model. This article shares details about managed datasets in FinOps hubs, Power BI functions used in both KQL and storage reports, Power BI tables, and KQL functions available in both Power BI and Data Explorer dashboards. If you're curious about the tables, functions, and even details about how versioning works, this will be a good reference to remember.
Simplified network architecture for public routing
In 0.7, we introduced a much-anticipated feature to enable FinOps hubs with private network routing (aka private endpoints). As part of this update, we added all FinOps hubs components into a dedicated, isolated network for increased security. And after the release, we started to receive immediate feedback from those who prefer the original public routing option from 0.6 and before, which was not hosted within an isolated network. Based on this feedback, we updated the public routing option to exclude networking components. This update simplifies the deployment and better aligns with what most organizations are looking for when using public routing.
We also published new documentation to explain both the public and private routing options in detail. If you're curious about the differences or planning to switch to one or the other, you'll want to start with Configure private networking in FinOps hubs. Configuring private networking requires some forethought, so we recommend you engage your network admins early to streamline the setup process, including peering and routing from your VPN into the isolated FinOps hubs network.
I also want to take a quick moment to thank everyone who shared their feedback about the networking changes. This was an amazing opportunity to see our tiny open-source community come together. We rallied, discussed options openly, and pivoted our approach to align with the community's preferred design direction. I'm looking forward to many more open discussions and decisions like this. The FinOps toolkit is for the community, by the community, and this has never been more apparent than over the last few months. Thank you all for making this community shine!
Managing exports and hubs with PowerShell
We probably don't do a good enough job raising awareness about the FinOps toolkit PowerShell module. Every time I introduce people to it, they always come back to me glowing with feedback about how much time it saved them. And with that, we made some small tweaks based on feedback we heard from FinOps toolkit users. Specifically, we updated commands for creating and reading Cost Management exports, and for deleting FinOps hubs. Let's start with exports…
The New-FinOpsCostExport command creates a new export. But it's not just a simple create call, like most PowerShell commands. One of the more exciting options is the -Backfill option, which allows you to backfill historical data up to 7 years with a single call! But this isn't new. In 0.8, we updated New-FinOpsCostExport to create price sheet, reservation recommendation, and reservation transaction exports. With this, we added some new options for reservation recommendations and system-assigned identities.
The Get-FinOpsCostExport command retrieves all exports on the current scope based on a set of filters. While updating other commands, we updated this command to return a more comprehensive object and renamed some of the properties to be clearer about their intent.
And just to call out another popular command: the Start-FinOpsCostExport command allows you to run an existing export on demand. This is most often used when backfilling FinOps hubs but works in any scenario. This command is what's used in the New-FinOpsCostExport command.
Lastly, we were asked to improve the confirmation experience for the Remove-FinOpsHub command (#1187). Now, the command shows a list of resources that will be deleted before confirming the delete. Simple, but helpful.
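As a quick sketch of how those commands fit together (the module name is FinOpsToolkit on the PowerShell Gallery; the scope, storage account, and export names below are placeholders, and the exact parameter names are worth confirming with Get-Help before running anything):

# Illustrative sketch only - identifiers are placeholders and parameters should be verified with Get-Help.
Install-Module -Name FinOpsToolkit -Scope CurrentUser

# Create a FOCUS cost export and backfill historical data in one call.
New-FinOpsCostExport -Name 'ftk-costs' `
    -Scope '/providers/Microsoft.Billing/billingAccounts/<billing-account-id>' `
    -StorageAccountId '<storage-account-resource-id>' `
    -Backfill 12

# List the exports configured on the scope and re-run one on demand.
Get-FinOpsCostExport -Scope '/providers/Microsoft.Billing/billingAccounts/<billing-account-id>'
Start-FinOpsCostExport -Name 'ftk-costs' -Scope '/providers/Microsoft.Billing/billingAccounts/<billing-account-id>'

# Remove a FinOps hub instance; the command now lists the resources it will delete before confirming.
Remove-FinOpsHub -Name 'finops-hub'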
We're generally waiting for a signal from people like you who need not just automation scripts, but any tools in the FinOps space. So if you find something missing, create an issue to let us know how we can help! Other new and noteworthy updates Many small improvements and bug fixes go into each release, so covering everything in detail can be a lot to take in. But I do want to call out a few other small things that you may be interested in. In the Implementing FinOps guide: Added the Learning FOCUS blog series to the FOCUS overview doc. In FinOps hubs: Clean up ResourceType values that have internal resource type IDs (for example, microsoft.compute/virtualmachines). Updated the default setting for Data Explorer trusted external tenants from “All tenants” to “My tenant only”. This change may cause breaking issues for Data Explorer clusters accessed by users from external tenants. Updated CommitmentDiscountUsage_transform_v1_0() to use parse_resourceid(). Documentation updates covering required permissions and supported datasets. Fixed timezones for Data Factory triggers to resolve issue where triggers would not start due to unrecognized timezone. Fixed an issue where x_ResourceType is using the wrong value. This fix resolves the issue for all newly ingested data. To fix historical data, reingest data using the ingestion_ExecuteETL Data Factory pipeline. Added missing request body to fix the false positive config_RunExportJobs pipeline validation errors in Data Factory. Deprecated the monthsago() KQL function. Please use the built-in startofmonth(datetime, [offset]) function instead. In Power BI reports: Added the Pricing units open dataset to support price sheet data cleanup. Added PricingUnit and x_PricingBlockSize columns to the Prices table. Added Effective Savings Rate (ESR) to Cost summary and Rate optimization reports. Expanded the columns in the commitment discount purchases page and updated to show recurring purchases separately. Fixed a date handling bug that resulted in a “We cannot apply operator >= to types List and Number” error (#1180). If you run into issues, set the report locale explicitly to the locale of the desired date format. In FinOps workbooks: On the Optimization workbook Commitment discounts tab, added Azure Arc Windows license management. On the Optimization workbook, Enabled “Export to CSV” option on the Idle backupsquery. On the Optimization workbook, Corrected VM processor details on the Computetab query. In Azure optimization engine: Improved multi-tenancy support with Azure Lighthouse guidance. In open data: Added 4 new region mappings to existing regions. Added the “1000 TB” pricing unit. Added 45 new and updated 52 existing resource types. Added 4 new resource type to service mappings. Thanking our community As we approach the two-year anniversary of our first public release, I have to look back and acknowledge how far we've come. We all want to do more and move faster, which makes it easy to get lost in the day-to-day work our community does and lose sight of the progress we're making. There are honestly too many people to thank, so I won't go into listing everyone, but I do want to send out an extra special thank you to the non-Microsoft contributors who are making this community and its tools better. I'll start the list off strong with Roland Krummenacher, a consultant who specializes in Azure optimization. He and his team built a tool similar to FinOps hubs and, after seeing 0.7 ship with Data Explorer, rearchitected their tool to extend FinOps hubs. 
Roland's team helps clients optimize their environment and build custom extensions to FinOps hubs that drive value realization. We're collaborating regularly to build a plan on how to bring some of their extensions into the toolkit. Several 0.8 improvements were made because of our collaboration with Roland. Next up is Graham Murphy, a FinOps professional who's been using FinOps hubs since the early days. Graham has always been amazingly collaborative. He extended FinOps hubs to bring in GCP and AWS FOCUS data and often shares his experiences with the FinOps community on Slack. Graham is also part of the FOCUS project, which has also proven useful. Speaking of FOCUS, Brian Wyka is an engineer who provided some feedback on our FOCUS documentation. But what impressed me most is that not only did Brian give us feedback, but he also engaged deeply in our pull request to address his feedback. It was amazing to see him stick to the topic through to the end. Similar to Graham, John Lundell is a FinOps practitioner who also extended FinOps hubs and is sharing his experiences with the community. John took the time to document his approach for using FinOps hubs to get data into Microsoft Fabric. For those interested, check out Sharing how we are enhancing the toolkit for bill-back purposes. Eladio Rincón Herrera has been with us for over a year now. The thing that really stands out to me about Eladio is the depth in which he gives context. This has helped immensely in a few times as we've narrowed down issues that not only he, but others were facing. Eladio's engagement in our discussion forums has helped many others both directly and indirectly. It's always a pleasure to work with Eladio! Psilantropy has also been with us for over a year. They have been quite prolific over that time as well, sharing ideas, issues, and supporting discussions across four separate tools! Their reports are always extremely detailed and immensely helpful in pinpointing the underlying problem or fully understanding the desired feature request. And now for someone who holds a special place in my heart: Patrick K. Patrick is an architect who leveraged FinOps hubs within his organization and needed to add private endpoints. He took the time to submit a pull request to contribute those changes back to the product. This was our first major external pull request, which is what made it so special. This spun up many discussions and debates on approaches that took time to get in, but I always look back to Patrick as the one who really kickstarted the effort with that first pull request! Of course, this isn't everyone. I had to trim the list of people down a few times to really focus on a select few. (I'm sure I'll feel guilty about skipping someone later!) And that doesn't even count all the Microsoft employees who make the FinOps toolkit successful – both in contributions and through supporting the community. I'm truly humbled when I see how this community has grown and continues to thrive! Thank you all! What's next As we rounded out 2024, I have to say I was quite proud of what we were able to achieve. And coming into 2025, I was expecting a lightweight initial release. But we ended up doing much more than we expected, which is great. We saw some amazing (and unexpected) improvements in this release. And while I'd love to say we're going to focus on small updates, I have to admit we have some lofty goals. 
Here are a few of the things we're looking at in the coming months: FinOps hubs will add support for ingesting data into Microsoft Fabric eventhouses and introduce recommendations, similar to what you see in Azure Optimization Engine and FinOps workbooks. Power BI reports will add support for Microsoft Fabric lakehouses. FinOps hubs and Power BI will both get updated to the latest FOCUS release. FinOps workbooks will continue to get recurring updates, expand to more FinOps capabilities, and add cost from FinOps hubs. Azure Optimization Engine will continue to receive small updates as we begin to bring some capabilities into FinOps hubs in upcoming releases. Each release, we'll try to pick at least one of the highest voted issues (based on 👍 votes) to continue to evolve based on your feedback, so keep the feedback coming! To learn more, check out the FinOps toolkit roadmap, and please let us know if there's anything you'd like to see in a future release. Whether you're using native products, automating and extending those products, or using custom solutions, we're here to help make FinOps easier to adopt and implement. View the full article
  23. Outlook Newsletters are intended for internal communications, at least for the preview. It's possible to take the HTML for a newsletter and send it with Azure Email Communication Services (ECS), the PAYG service for bulk email. It sounds like a good way to use Outlook Newsletters to share information with customers and other external recipients. Some manual intervention makes everything work. It would be great if Microsoft tweaked Outlook to remove the rough edges. https://office365itpros.com/2025/03/12/outlook-newsletters-ecs/ View the full article
  24. Who has more details? View the full article
  25. Hey everyone... Not a big techy person, and I'm at the point of frustration where I have no idea what's causing these BSODs anymore. I've tried numerous things from YouTube/Google to solve the issues, but they keep happening, and I would really appreciate some help if possible to try and solve this issue. I will randomly blue screen, or my PC will restart, sometimes when I'm gaming or even just watching a YouTube video. At first I thought it was just my RAM, so I went out and spent $150 on new RAM... but the BSODs have continued even after the new sticks went in. If I'm lucky I can go a few hours without a blue screen, but sometimes if I blue screen it will do it every 5 minutes or so until it gives up and lets me play for a while. I'm not sure if there is a setting that's wrong in my PC or anything, but I have tried many things, even checking for outdated drivers, and to my knowledge I have it all updated. I have even tried resetting my PC and keeping personal files.. View the full article