Microsoft Windows Bulletin Board

Windows Server

Everything posted by Windows Server

  1. Join our next community call on March 25, 2025. There were so many questions about the new features in SharePoint Pages and News, especially around flexible sections, that we're bringing a SharePoint PM expert to deep dive on this topic. In case you missed the announcement in our January call, we are moving our call format back to Teams! We've heard your feedback loud and clear and moved to Teams webinars starting February 2025, which means you must register to ensure you will be able to join the call when it starts. An on-demand recording will still be available in our Driving Adoption > Events section, as well as on our Microsoft Community Learning YouTube channel. The calls will still start at 5 minutes past the hour for both sessions (at 8:05 AM and 5:05 PM PT), and they will still end at the top of the hour (9:00 AM and 6:00 PM PT, respectively). The join links for both sessions remain the same: https://aka.ms/M365ChampionCallAM https://aka.ms/M365ChampionCallPM While our calls are still open to everyone, you must be a member of the Microsoft 365 Champion Program in order to access the presentation materials - the access link is in the initial welcome email and the monthly newsletter emails. If you have not yet joined our Champion community, sign up here to get access to the monthly newsletters, calendar invites, and program assets (e.g., the presentations). View the full article
  2. Join our next community call on March 25, 2025, to deep dive into the new features in SharePoint Pages and News, including flexible sections! Host: Tiffany Lee Guest: Katelyn Helms Moderator: Jessie Hwang 📢 NOTE: our community call format has changed to using Teams webinars to enable more dynamic discussions! Join link is still the same but you must register to be able to join the call when it starts: https://aka.ms/M365ChampionCallPM ⏰ 🗨️ Each call includes an open Q&A discussion section, where you'll have a chance to ask your questions about Microsoft 365. Our new call format will make this easier! 👋 Join the Microsoft 365 Champion program today! Champions combine technical acumen with people skills to drive meaningful change. Our community calls are open to everyone but only Champions have access to the presentation resources (access link in the initial welcome email and in the monthly newsletters). Join now: https://aka.ms/M365Champions. View the full article
  3. Join our next community call on March 25, 2025, to deep dive into the new features in SharePoint Pages and News, including flexible sections! Host: Tiffany Lee Guest: Katelyn Helms Moderator: Jessie Hwang 📢 NOTE: our community call format has changed to using Teams webinars to enable more dynamic discussions! Join link is still the same but you must register to be able to join the call when it starts: https://aka.ms/M365ChampionCallAM ⏰ 🗨️ Each call includes an open Q&A discussion section, where you'll have a chance to ask your questions about Microsoft 365. Our new call format will make this easier! 👋 Join the Microsoft 365 Champion program today! Champions combine technical acumen with people skills to drive meaningful change. Our community calls are open to everyone but only Champions have access to the presentation resources (access link in the initial welcome email and in the monthly newsletters). Join now: https://aka.ms/M365Champions. View the full article
  4. Hello all, I am in need of an Excel formula. I cannot find help on exactly what I am hoping to do, and that could be because I don't know the key words to search for. I have a document tracking annual tasks; when a task is completed, that date is entered into an Excel sheet. I need those cells to turn green once it's been 6 months since the task was done, stay green until they turn yellow when it's been 7-9 months since the task was done, and then turn red when it's been 10-12 months since the task was last done. Any help with this is very much appreciated! View the full article
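One way to get this behaviour is conditional formatting rather than a single in-cell formula. Sketching it on the assumption that the completion dates sit in column A starting at A2 (adjust the reference to your sheet), select the date range and add three rules of the "Use a formula to determine which cells to format" type:

Green (6 to under 7 months): =AND(A2<>"",TODAY()>=EDATE(A2,6),TODAY()<EDATE(A2,7))
Yellow (7 to under 10 months): =AND(A2<>"",TODAY()>=EDATE(A2,7),TODAY()<EDATE(A2,10))
Red (10 months or more): =AND(A2<>"",TODAY()>=EDATE(A2,10))

EDATE(date, n) returns the date n months after the given date, so the colours switch at exactly 6, 7, and 10 months after the recorded completion date, and blank cells stay uncoloured.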
  5. Join us for our very first Technology Cohort session featuring perspectives from the People Science team and Customer Experience teams to discuss current industry trends and key challenges our clients are currently facing. Session will be recorded and posted to the Technology private user group. When you join this event, your name, email address and/or phone number may be viewable by other session participants in the attendee list. By joining, you’re agreeing to this experience. View the full article
  6. Webinar Registration: HERE Join us for our very first cohort meeting bringing together HR, IT, and other business leaders to create a People-Centric approach to AI Transformation. This session will feature perspectives from the People Science team and Customer Experience teams, discussing current AI trends and key challenges our clients are currently facing. Session will be recorded and shared with registrants. When you join this event, your name, email address and/or phone number may be viewable by other session participants in the attendee list. By joining, you’re agreeing to this experience. View the full article
  7. A few days ago, I was working on a case where a customer reported an unexpected behavior in their application: even after switching the connection policy from Proxy to Redirect, the connections were still using Proxy mode. After investigating, we found that the customer was using connection pooling, which caches connections for reuse. This meant that even after changing the connection policy, the existing connections continued using Proxy mode because they had already been established with that setting. The new policy would only apply to newly created connections, not the ones being reused from the pool. To confirm this, we ran a test using .NET and Microsoft.Data.SqlClient to analyze how the connection pool behaves and whether connections actually switch to Redirect mode when the policy changes. How Connection Pooling Works Connection pooling is designed to reuse existing database connections instead of creating a new one for every request. This improves performance by reducing latency and avoiding unnecessary authentication handshakes. However, once a connection is established, it is cached with the original settings, including the connection policy (Proxy or Redirect), the authentication mode, and the connection encryption settings. This means that if you change the connection policy but reuse a pooled connection, it will retain its original mode. The only way to apply the new policy is to create a new physical connection that does not come from the pool. Testing Connection Pooling Behavior To test the connection pooling behavior, I wrote a small C# program that opens a connection, reports information about the port in use, and closes the connection, repeating this process 10,000 times. The idea was to track active connections and check if the port and connection policy were changing after modifying the connection policy. Initially, I attempted to use netstat -ano to track active connections and monitor the local port used by each session. Unfortunately, in Azure SQL Database, local port information is not reported, making it difficult to confirm whether a connection was truly being reused at the OS level. Despite this limitation, by analyzing the session behavior and connection reuse patterns, we were able to reach a clear conclusion.
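(A practical corollary worth noting before the code: the pool is keyed on the exact connection string, so changing the connection string creates a separate pool, and Microsoft.Data.SqlClient also exposes the static SqlConnection.ClearAllPools() and SqlConnection.ClearPool(connection) methods. Either approach forces new physical connections that pick up the updated connection policy without restarting the application.)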
using System;
using System.Collections.Generic;
using System.Data;
using System.Diagnostics;
using System.Linq;
using System.Text;
using System.Text.RegularExpressions;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

namespace InfoConn
{
    class Program
    {
        static void Main()
        {
            string connectionStringProxy = "Server=tcp:servername.database.windows.net,1433;Database=db1;User Id=user1;Password=..;Pooling=True;";
            Console.WriteLine("Starting Connection Pooling Test");

            for (int i = 0; i < 10000; i++)
            {
                using (SqlConnection conn = new SqlConnection(connectionStringProxy))
                {
                    conn.Open();
                    ShowConnectionDetails(conn, i);
                }
                Thread.Sleep(5000);
            }

            Console.WriteLine("Test complete.");
        }

        static void ShowConnectionDetails(SqlConnection conn, int attempt)
        {
            string query = "SELECT session_id, client_net_address, local_net_address, auth_scheme FROM sys.dm_exec_connections WHERE session_id = @@SPID;";
            using (SqlCommand cmd = new SqlCommand(query, conn))
            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                while (reader.Read())
                {
                    Console.WriteLine($"[Attempt {attempt + 1}] Session ID: {reader["session_id"]}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Client IP: {reader["client_net_address"]}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Local IP: {reader["local_net_address"]}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Auth Scheme: {reader["auth_scheme"]}");
                }
            }
            RetrievePortInformation(attempt);
        }

        static void RetrievePortInformation(int attempt)
        {
            try
            {
                int currentProcessId = Process.GetCurrentProcess().Id;
                Console.WriteLine($"[Attempt {attempt + 1}] PID: {currentProcessId}");

                string netstatOutput = RunNetstatCommand();
                var match = Regex.Match(netstatOutput, $@"\s*TCP\s*(\S+):(\d+)\s*(\S+):(\d+)\s*ESTABLISHED\s*{currentProcessId}");
                if (match.Success)
                {
                    string localAddress = match.Groups[1].Value;
                    string localPort = match.Groups[2].Value;
                    string remoteAddress = match.Groups[3].Value;
                    string remotePort = match.Groups[4].Value;
                    Console.WriteLine($"[Attempt {attempt + 1}] Local IP: {localAddress}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Local Port: {localPort}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Remote IP: {remoteAddress}");
                    Console.WriteLine($"[Attempt {attempt + 1}] Remote Port: {remotePort}");
                }
                else
                {
                    Console.WriteLine($"[Attempt {attempt + 1}] No active TCP connection found in netstat.");
                }
            }
            catch (Exception ex)
            {
                Console.WriteLine($"[Attempt {attempt + 1}] Error retrieving port info: {ex.Message}");
            }
        }

        static string RunNetstatCommand()
        {
            using (Process netstatProcess = new Process())
            {
                netstatProcess.StartInfo.FileName = "netstat";
                netstatProcess.StartInfo.Arguments = "-ano";
                netstatProcess.StartInfo.RedirectStandardOutput = true;
                netstatProcess.StartInfo.UseShellExecute = false;
                netstatProcess.StartInfo.CreateNoWindow = true;
                netstatProcess.Start();
                string output = netstatProcess.StandardOutput.ReadToEnd();
                netstatProcess.WaitForExit();
                return output;
            }
        }
    }
}
View the full article
  8. Dear All, I start by saying that I don't know whether this matter (and issue for me) was already raised here or somewhere else; I couldn't find anything relevant to me. I have deployed the DeleteBlobLogicApp as explained here: https://learn.microsoft.com/en-us/azure/defender-for-cloud/defender-for-storage-configure-malware-scan However, there seem to be issues with the workflow actions. Firstly, the trigger event action was good, but then it failed at the next action "GetBlobEntity": the error that it returned was: "The 'from' property value in the 'query' action inputs is of type 'Null'. The value must be an array." Because of that, the blob is not deleted from the container. I fixed it, thanks mostly to Copilot as I am not a programmer or developer, by changing the relevant code part like so: { "type": "Query", "inputs": { "from": "@if(empty(triggerBody()?['Entities']), json('[]'), triggerBody()?['Entities'])", "where": "@equals(item().type, 'blob')" }, "runAfter": {} } But then I got stuck and had to give up at the next action "Delete Blob": here the returned error was: InvalidTemplate. Unable to process template language expressions in action 'Delete_Blob' inputs at line '0' and column '0': 'The template language expression 'body('GetBlobEntity')[0].Url' cannot be evaluated because array index '0' cannot be selected from empty array. The action's code part is the following: { "type": "Http", "inputs": { "uri": "@{body('GetBlobEntity')[0].Url}", "method": "DELETE", "headers": { "x-ms-version": "2019-07-07" }, "authentication": { "audience": "https://@{triggerBody()?['CompromisedEntity']}.blob.core.windows.net/", "type": "ManagedServiceIdentity" } }, "runAfter": { "GetBlobEntity": [ "Succeeded" ] } } Do you have any clue on how to code it correctly? Thank you very much and I am sorry if it wasn't the right discussion board! Gianluca View the full article
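A direction to try, offered as a sketch rather than a verified fix: the 'Delete Blob' action evaluates body('GetBlobEntity')[0].Url even when the 'GetBlobEntity' query returns an empty array, which is exactly when that index lookup fails. Wrapping the HTTP DELETE in a Condition action that only runs when @greater(length(body('GetBlobEntity')), 0) is true (or using first(body('GetBlobEntity'))?['Url'] behind such a guard) makes the delete run only when a blob entity was actually found. If the array is always empty, it is also worth checking whether the triggering Defender alert really carries a blob entry in triggerBody()?['Entities'].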
  9. Introduction In today's digital landscape, rapid innovation—especially in areas like AI—is reshaping how we work and interact. With this progress comes a growing array of cyber threats and gaps that impact every organization. Notably, the convergence of AI, data security, and digital assets has become particularly enticing for bad actors, who leverage these advanced tools and valuable information to orchestrate sophisticated attacks. Security is far from an optional add-on; it is the strategic backbone of modern business operations and resiliency. The evolving threat landscape Cyber threats are becoming more sophisticated and persistent. A single breach can result in costly downtime, loss of sensitive data, and damage to customer trust. Organizations must not only detect incidents but also proactively prevent them –all while complying with regulatory standards like GDPR and HIPAA. Security requires staying ahead of threats and ensuring that every critical component of your digital environment is protected. Azure Firewall: Strengthening security for all users Azure Firewall is engineered and innovated to benefit all users by serving as a robust, multifaceted line of defense. Below are five key scenarios that illustrate how Azure Firewall provides security across various use cases: First, Azure Firewall acts as a gateway that separates the external world from your internal network. By establishing clearly defined boundaries, it ensures that only authorized traffic can flow between different parts of your infrastructure. This segmentation is critical in limiting the spread of an attack, should one occur, effectively containing potential threats to a smaller segment of the network. Second, the key role of the Azure Firewall is to filter traffic between clients, applications, and servers. This filtering capability prevents unauthorized access, ensuring that hackers cannot easily infiltrate private systems to steal sensitive data. For instance, whether protecting personal financial information or health data, the firewall inspects and controls traffic to maintain data integrity and confidentiality. Third, beyond protecting internal Azure or on-premises resources, Azure Firewall can also regulate outbound traffic to the Internet. By filtering user traffic from Azure to the Internet, organizations can prevent employees from accessing potentially harmful websites or inadvertently downloading malicious content. This is supported through FQDN or URL filtering, as well as web category controls, where administrators can filter traffic to domain names or categories such as social media, gambling, hacking, and more. In addition, security today means staying ahead of threats, not just controlling access. It requires proactively detecting and blocking malicious traffic before it even reaches the organization’s environment. Azure Firewall is integrated with Microsoft’s Threat Intelligence feed, which supplies millions of known malicious IP addresses and domains in real time. This integration enables the firewall to dynamically detect and block threats as soon as they are identified. In addition, Azure Firewall IDPS (Intrusion Detection and Prevention System) extends this proactive defense by offering advanced capabilities to identify and block suspicious activity by: Monitoring malicious activity: Azure Firewall IDPS rapidly detects attacks by identifying specific patterns associated with malware command and control, phishing, trojans, botnets, exploits, and more. 
Proactive blocking: Once a potential threat is detected, Azure Firewall IDPS can automatically block the offending traffic and alert security teams, reducing the window of exposure and minimizing the risk of a breach. Together, these integrated capabilities ensure that your network is continuously protected by a dynamic, multi-layered defense system that not only detects threats in real time but also helps prevent them from ever reaching your critical assets. Image: Trend illustrating the number of IDPS alerts Azure Firewall generated from September 2024 to March 2025 Finally, Azure Firewall’s cloud-native architecture delivers robust security while streamlining management. An agile management experience not only improves operational efficiency but also frees security teams to focus on proactive threat detection and strategic security initiatives by providing: High availability and resiliency: As a fully managed service, Azure Firewall is built on the power of the cloud, ensuring high availability and built-in resiliency to keep your security always active. Autoscaling for easy maintenance: Azure Firewall automatically scales to meet your network’s demands. This autoscaling capability means that as your traffic grows or fluctuates, the firewall adjusts in real time—eliminating the need for manual intervention and reducing operational overhead. Centralized management with Azure Firewall Manager: Azure Firewall Manager provides centralized management experience for configuring, deploying, and monitoring multiple Azure Firewall instances across regions and subscriptions. You can create and manage firewall policies across your entire organization, ensuring uniform rule enforcement and simplifying updates. This helps reduce administrative overhead while enhancing visibility and control over your network security posture. Seamless integration with Azure Services: Azure Firewall’s strong integration with other Azure services, such as Microsoft Sentinel, Microsoft Defender, and Azure Monitor, creates a unified security ecosystem. This integration not only enhances visibility and threat detection across your environment but also streamlines management and incident response. Conclusion Azure Firewall's combination of robust network segmentation, advanced IDPS and threat intelligence capabilities, and cloud-native scalability makes it an essential component of modern security architectures—empowering organizations to confidently defend against today’s ever-evolving cyber threats while seamlessly integrating with the broader Azure security ecosystem. View the full article
  10. Save the date and join us for our next monthly Windows Office Hours, on March 20th from 8:00-9:00a PT. We will have a broad group of product experts, servicing experts, and engineers representing Windows, Microsoft Intune, Configuration Manager, Windows 365, Windows Autopilot, security, public sector, FastTrack, and more. They will be standing by -- in chat -- to provide guidance, discuss strategies and tactics, and, of course, answer any specific questions you may have. For more details about how Windows Office Hours works, go to our Windows IT Pro Blog. If 8:00 a.m. Pacific Time doesn't work for you, post your questions on the Windows Office Hours: March 20 event page, up to 48 hours in advance. Hope you can join us! View the full article
  11. The OpenAI Agents SDK provides a powerful framework for building intelligent AI assistants with specialised capabilities. In this blog post, I'll demonstrate how to integrate Azure OpenAI Service and Azure API Management (APIM) with the OpenAI Agents SDK to create a banking assistant system with specialised agents. Key Takeaways: Learn how to connect the OpenAI Agents SDK to Azure OpenAI Service Understand the differences between direct Azure OpenAI integration and using Azure API Management Implement tracing with the OpenAI Agents SDK for monitoring and debugging Create a practical banking application with specialized agents and handoff capabilities The OpenAI Agents SDK The OpenAI Agents SDK is a powerful toolkit that enables developers to create AI agents with specialised capabilities, tools, and the ability to work together through handoffs. It's designed to work seamlessly with OpenAI's models, but can be integrated with Azure services for enterprise-grade deployments. Setting Up Your Environment To get started with the OpenAI Agents SDK and Azure, you'll need to install the necessary packages: pip install openai openai-agents python-dotenv You'll also need to set up your environment variables. Create a `.env` file with your Azure OpenAI or APIM credentials: For Direct Azure OpenAI Connection: # .env file for Azure OpenAI AZURE_OPENAI_API_KEY=your_api_key AZURE_OPENAI_API_VERSION=2024-08-01-preview AZURE_OPENAI_ENDPOINT=https://your-resource-name.openai.azure.com/ AZURE_OPENAI_DEPLOYMENT=your-deployment-name For Azure API Management (APIM) Connection: # .env file for Azure APIM AZURE_APIM_OPENAI_SUBSCRIPTION_KEY=your_subscription_key AZURE_APIM_OPENAI_API_VERSION=2024-08-01-preview AZURE_APIM_OPENAI_ENDPOINT=https://your-apim-name.azure-api.net/ AZURE_APIM_OPENAI_DEPLOYMENT=your-deployment-name Connecting to Azure OpenAI Service The OpenAI Agents SDK can be integrated with Azure OpenAI Service in two ways: direct connection or through Azure API Management (APIM). Option 1: Direct Azure OpenAI Connection from openai import AsyncAzureOpenAI from agents import set_default_openai_client from dotenv import load_dotenv import os # Load environment variables load_dotenv() # Create OpenAI client using Azure OpenAI openai_client = AsyncAzureOpenAI( api_key=os.getenv("AZURE_OPENAI_API_KEY"), api_version=os.getenv("AZURE_OPENAI_API_VERSION"), azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"), azure_deployment=os.getenv("AZURE_OPENAI_DEPLOYMENT") ) # Set the default OpenAI client for the Agents SDK set_default_openai_client(openai_client) Option 2: Azure API Management (APIM) Connection from openai import AsyncAzureOpenAI from agents import set_default_openai_client from dotenv import load_dotenv import os # Load environment variables load_dotenv() # Create OpenAI client using Azure APIM openai_client = AsyncAzureOpenAI( api_key=os.getenv("AZURE_APIM_OPENAI_SUBSCRIPTION_KEY"), # Note: Using subscription key api_version=os.getenv("AZURE_APIM_OPENAI_API_VERSION"), azure_endpoint=os.getenv("AZURE_APIM_OPENAI_ENDPOINT"), azure_deployment=os.getenv("AZURE_APIM_OPENAI_DEPLOYMENT") ) # Set the default OpenAI client for the Agents SDK set_default_openai_client(openai_client) Key Difference: When using Azure API Management, you use a subscription key instead of an API key. This provides an additional layer of management, security, and monitoring for your OpenAI API access. 
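As a side note, the snippets below pass a check_account_balance tool that the post describes as "defined elsewhere." For readers new to the SDK, a minimal function tool might look like the following sketch; the function_tool decorator comes from the Agents SDK, while the account lookup itself is a made-up stand-in:

from agents import function_tool

@function_tool
def check_account_balance(account_id: str) -> str:
    """Return the current balance for the given account ID."""
    # Stand-in data: a real tool would query your banking backend here.
    balances = {"12345": 2543.22, "67890": 310.05}
    balance = balances.get(account_id)
    if balance is None:
        return f"No account found with ID {account_id}."
    return f"Account {account_id} has a balance of ${balance:,.2f}."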
Creating Agents with the OpenAI Agents SDK Once you've set up your Azure OpenAI or APIM connection, you can create agents using the OpenAI Agents SDK: from agents import Agent from openai.types.chat import ChatCompletionMessageParam # Create a banking assistant agent banking_assistant = Agent( name="Banking Assistant", instructions="You are a helpful banking assistant. Be concise and professional.", model="gpt-4o", # This will use the deployment specified in your Azure OpenAI/APIM client tools=[check_account_balance] # A function tool defined elsewhere ) The OpenAI Agents SDK automatically uses the Azure OpenAI or APIM client you've configured, making it seamless to switch between different Azure environments or configurations. Implementing Tracing with Azure OpenAI The OpenAI Agents SDK includes powerful tracing capabilities that can help you monitor and debug your agents. When using Azure OpenAI or APIM, you can implement two types of tracing: 1. Console Tracing for Development from agents.tracing.processors import ConsoleSpanExporter, BatchTraceProcessor from agents.tracing import set_default_trace_processor # Set up console tracing console_exporter = ConsoleSpanExporter() console_processor = BatchTraceProcessor(exporter=console_exporter) set_default_trace_processor(console_processor) 2. OpenAI Tracing for Production Monitoring from agents.tracing.processors import OpenAITracingExporter, BatchTraceProcessor from agents.tracing import set_default_trace_processor import os # Set up OpenAI tracing openai_exporter = OpenAITracingExporter(api_key=os.getenv("OPENAI_TRACING_API_KEY")) openai_processor = BatchTraceProcessor(exporter=openai_exporter) set_default_trace_processor(openai_processor) Tracing is particularly valuable when working with Azure deployments, as it helps you monitor usage, performance, and behavior across different environments. Running Agents with Azure OpenAI To run your agents with Azure OpenAI or APIM, use the Runner class from the OpenAI Agents SDK: from agents import Runner import asyncio async def main(): # Run the banking assistant result = await Runner.run( banking_assistant, input="Hi, I'd like to check my account balance." ) print(f"Response: {result.response.content}") if __name__ == "__main__": asyncio.run(main()) Practical Example: Banking Agents System Let's look at how we can use Azure OpenAI or APIM with the OpenAI Agents SDK to create a banking system with specialized agents and handoff capabilities. 1. Define Specialized Banking Agents We'll create several specialized agents: General Banking Assistant: Handles basic inquiries and account information Loan Specialist: Focuses on loan options and payment calculations Investment Specialist: Provides guidance on investment options Customer Service Agent: Routes inquiries to specialists 2. Implement Handoff Between Agents from agents import handoff, HandoffInputData from agents.extensions import handoff_filters # Define a filter for handoff messages def banking_handoff_message_filter(handoff_message_data: HandoffInputData) -> HandoffInputData: # Remove any tool-related messages from the message history handoff_message_data = handoff_filters.remove_all_tools(handoff_message_data) return handoff_message_data # Create customer service agent with handoffs customer_service_agent = Agent( name="Customer Service Agent", instructions="""You are a customer service agent at a bank. Help customers with general inquiries and direct them to specialists when needed. 
If the customer asks about loans or mortgages, handoff to the Loan Specialist. If the customer asks about investments or portfolio management, handoff to the Investment Specialist.""", handoffs=[ handoff(loan_specialist_agent, input_filter=banking_handoff_message_filter), handoff(investment_specialist_agent, input_filter=banking_handoff_message_filter), ], tools=[check_account_balance], ) 3. Trace the Conversation Flow from agents import trace async def main(): # Trace the entire run as a single workflow with trace(workflow_name="Banking Assistant Demo"): # Run the customer service agent result = await Runner.run( customer_service_agent, input="I'm interested in taking out a mortgage loan. Can you help me understand my options?" ) print(f"Response: {result.response.content}") if __name__ == "__main__": asyncio.run(main()) Benefits of Using Azure OpenAI/APIM with the OpenAI Agents SDK Integrating Azure OpenAI or APIM with the OpenAI Agents SDK offers several advantages: Enterprise-Grade Security: Azure provides robust security features, compliance certifications, and private networking options Scalability: Azure's infrastructure can handle high-volume production workloads Monitoring and Management: APIM provides additional monitoring, throttling, and API management capabilities Regional Deployment: Azure allows you to deploy models in specific regions to meet data residency requirements Cost Management: Azure provides detailed usage tracking and cost management tools Conclusion The OpenAI Agents SDK combined with Azure OpenAI Service or Azure API Management provides a powerful foundation for building intelligent, specialized AI assistants. By leveraging Azure's enterprise features and the OpenAI Agents SDK's capabilities, you can create robust, scalable, and secure AI applications for production environments. Whether you choose direct Azure OpenAI integration or Azure API Management depends on your specific needs for API management, security, and monitoring. Both approaches work seamlessly with the OpenAI Agents SDK, making it easy to build sophisticated agent-based applications. Azure OpenAI Service Azure APIM OpenAI Agents SDK AI Development Enterprise AI View the full article
  12. Generative AI is becoming increasingly prevalent in healthcare, and its significance is continuing to grow. Given the documentation-intensive nature of healthcare, generative AI presents an excellent opportunity to help alleviate this burden. However, to truly offset the clinician workload, it is crucial that content is checked for reliability and consistency before it is validated by a human. We are pleased to announce the private preview of our clinical conflict detection safeguard, available through our healthcare agent service. This safeguard helps users identify potential clinical conflicts within documentation content, regardless of whether it was generated by a human or AI. Identifying Clinical Conflicts: Seven Detected Categories Every conflict identified by the clinical conflict detection safeguard will indicate the conflict type and reference document content that constitutes the conflict so that the healthcare provider user can validate and take appropriate actions. Opposition conflicts: Normal vs abnormal findings of the same body structure E.g. Left breast: Unremarkable <> The left breast demonstrates persistent circumscribed masses. Negative vs positive statements about the same clinical entity E.g. No cardiopulmonary disease <> Bibasilar atelectasis Lab/vital sign interpretation vs condition E.g. Low blood sugar level at admission <> Patient was admitted with hyperglycemia Opposite disorders/symptoms E.g. Hypernatremia <> Hyponatremia Sex information opposites E.g. Female patient comes in with ... <> Testis: Unremarkable Anatomical conflicts: Absent vs present body structures E.g. Cholelithiasis <> The gallbladder is absent History of removal procedure vs. present body structure E.g. Bilat Mastectomy (2010) <> Left breast: solid mass Conducted imaging study versus clinical finding of body structure E.g. Procedure: Chest XR <> Brain lesion Laterality mismatch of same clinical finding E.g. Results: Stable ductal carcinoma of left breast. <> A&P: Stage 0 stable ductal carcinoma of right breast. Value Conflicts: Condition vs. lab / vital sign / measurement E.g. Hypoglycemia <> Blood Gluc 145 Conflicting lab measurement on same timestamp E.g. 02/11/2022 WBC-8.0 <> 02/11/2022 WBC-5.5 Contraindication conflicts: Medication/substance allergy vs. prescribed medication E.g. He is allergic to acetaminophen. <> Home medication include Tylenol, ... Comparison conflicts: Increased/decreased statements vs. opposite measurements E.g. Ultrasound shows a 3 cm lesion in the bladder wall, previously 4 cm, an increase in size. Descriptive conflict: Positive vs unlikely statements of same condition E.g. Lungs: Pleural effusion is unlikely <> Assessment: Pleural effusion Conflicting characteristics of same condition E.g. Results: Stable small pleural effusion <> Impression: Small pleural effusion Multiple versus Single statement of same condition E.g. Findings: 9 mm lesion of upper pole right kidney <> Assessment: Right renal lesions Metadata conflicts: Age information in provided metadata vs documentation E.g. Date of Birth = “04-08-1990” Date of Service=”11-25-2024" <> A 42-year-old female presents for evaluation of pneumonia. Sex information in provided metadata vs documentation * E.g. 
Date of Service=”11-25-2024" Sex= “female” <> Finding: Prostate is enlarged A closer look Consider the following radiology report snippet: Exam: CT of the abdomen and pelvis Clinical history: LLQ pain x 10 days, cholecystectomy 6 weeks ago Findings: - New calcified densities are seen in the nondistended gallbladder. - Heterogeneous enhancement of the liver with periportal edema. No suspicious hepatic masses are identified. Portal veins are patent. - Gastrointestinal Tract: No abnormal dilation or wall thickening. Diverticulosis. - Kidneys are normal in size. The patient comes in post cholecystectomy for a CT of abdomen/pelvis. We can create a simple request to the clinical conflict detection safeguards like this: { "input_document":{ "document_id": "1", "document_text": "Exam: CT of the abdomen and pelvis\nClinical history: LLQ pain x 10 days, cholecystectomy 6 weeks ago\nFindings:\n- New calcified densities are seen in the nondistended gallbladder.\n- Heterogeneous enhancement of the liver with periportal edema. No suspicious hepatic masses are identified. Portal veins are patent.\n- Gastrointestinal Tract: No abnormal dilation or wall thickening. Diverticulosis.\n- Kidneys are normal in size.", "document_metadata":{ "document_type":"CLINICAL_REPORT", "date_of_service": "2024-10-10", "locale": "en-us" } }, "patient_metadata":{ "date_of_birth": "1944-01-01", "date_of_admission": "2024-10-10", "biological_sex": "FEMALE", "patient_id": "3" }, "request_id": "1" } The request provides the metadata for document text to allow for potential metadata conflict detections. The clinical conflict detection safeguard considers the document text together with the metadata and returns the following response: { "inferences": [ { "type": "ANATOMICAL_CONFLICT", "confidence_score": 1, "output_token": { "offsets": [ { "document_id": "1", "begin": 73, "end": 88 } ] }, "reference_token": { "offsets": [ { "document_id": "1", "begin": 153, "end": 165 }, { "document_id": "1", "begin": 166, "end": 177 } ] } } ], "status": "SUCCESS", "model_version": "1" } The safeguard picks up an anatomical conflict in the document text and provides text references using the offsets that make up the clinical conflict. In this case, it picks up an anatomical conflict between “cholecystectomy” (which means a gallbladder removal) and the finding of “New calcified densities are seen in the nondistended gallbladder”. The new densities in the gallbladder conflict with the statement that the gallbladder was removed 6 weeks prior. In practice The clinical conflicts detected by the safeguard can be leveraged in various stages of any report generation solution to build trust in its clinical consistency. Imagine a report generation application calling the clinical conflict detection safeguards to highlight potential inconsistencies to the HCP end user — as illustrated below — for review before signing off on the report. There are multiple conflicts in the example above, but the highlight shows inconsistently generated documentation. The normal statement about the lungs contradicts “small nodules in the left lung” findings, so the “Lungs are unremarkable” statement should have been removed. How to use ​ To use the clinical safeguards API, users must provision a healthcare agent service resource in your Azure subscription.​ When creating the healthcare agent service, make sure to set the plan to “Agent (C1)”.​ Once created, please fill out the form here. 
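To make the offset values in that sample response tangible, here is a small illustrative Python snippet (not from the service documentation) that slices the submitted document_text with the returned begin/end positions to recover the conflicting spans:

document_text = (
    "Exam: CT of the abdomen and pelvis\n"
    "Clinical history: LLQ pain x 10 days, cholecystectomy 6 weeks ago\n"
    "Findings:\n"
    "- New calcified densities are seen in the nondistended gallbladder.\n"
    # remaining findings omitted; the flagged offsets fall within the lines above
)
response = {
    "inferences": [
        {
            "type": "ANATOMICAL_CONFLICT",
            "output_token": {"offsets": [{"begin": 73, "end": 88}]},
            "reference_token": {"offsets": [{"begin": 153, "end": 165}, {"begin": 166, "end": 177}]},
        }
    ]
}
for inference in response["inferences"]:
    flagged = [document_text[o["begin"]:o["end"]] for o in inference["output_token"]["offsets"]]
    referenced = [document_text[o["begin"]:o["end"]] for o in inference["reference_token"]["offsets"]]
    print(inference["type"], flagged, "<>", referenced)
# Prints: ANATOMICAL_CONFLICT ['cholecystectomy'] <> ['nondistended', 'gallbladder']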
* This clinical safeguard does not define criteria for determining or identifying biological sex. Sex mismatch is based on the information in the metadata and the medical note. Please remember that neither Clinical Conflict Detection nor Health Agent Service is made available, designed, intended or licensed to be used (1) as a medical device, (2) in the diagnosis, cure, mitigation, monitoring, treatment or prevention of a disease, condition or illness or as a substitute for professional medical advice. The use of these products is subject to the Microsoft Product Terms and other licensing agreements and to the Medical Device Disclaimer and documentation available here. View the full article
  13. Introduction This is the second post for RAG Time, a 7-part educational series on retrieval-augmented generation (RAG). Read the first post of this series and access all videos and resources in our Github repo. Journey 2 covers indexing and retrieval techniques for RAG: Data ingestion approaches: use Azure AI Search to upload, extract, and process documents using Azure Blob Storage, Document Intelligence, and integrated vectorization. Keyword and vector search: compare traditional keyword matching with vector search Hybrid search: how to apply keyword and vector search techniques with Reciprocal Rank Fusion (RRF) for better quality results across more use cases. Semantic ranker and query rewriting: See how reordering results using semantic scoring and enhancing queries through rewriting can dramatically improve relevance. Data Pipeline What is data ingestion? When building a RAG framework, the first step is getting your data into the retrieval system and processed so that it’s primed for the LLM to understand. The following sections cover the fundamentals of data ingestion. A future RAG Time post will cover more advanced topics in data ingestion. Integrated Vectorization Azure AI Search offers integrated vectorization, a built-in feature. It automatically converts your ingested text (or even images) into vectors by leveraging advanced models like OpenAI’s text-embedding-3-large—or even custom models you might have. This real-time transformation means that every document and every segment of it is instantly prepared for semantic analysis, with the entire process seamlessly tied into your ingestion pipeline. No manual intervention is required, which means fewer bottlenecks and a more streamlined workflow. Parsing documents The first step of the data ingestion process involves uploading your documents from various sources—whether that’s Azure Blob Storage, Azure Data Lake Storage Gen2 or OneLake. Once the data is in the cloud, services such as Azure Document Intelligence and Azure Content Understanding step in to extract all the useful information: text, tables, structural details, and even images embedded in your PDFs, Office documents, JSON files, and more. In addition, Azure AI Search automatically supports change tracking so you can rest assured your documents remain up to date without any extra effort. Chunking Documents A critical component in integrated vectorization is chunking. Most language models have a limited context window, which means feeding in too much unstructured text can dilute the quality of your results. By splitting larger documents into smaller, manageable chunks based on sentence boundaries or token counts—while intelligently allowing overlaps to preserve context—you ensure that key details aren’t lost. Overlapping can be especially important for maintaining the continuity of thought, such as preserving table headers or the transition between paragraphs, which in turn boosts retrieval accuracy and improves overall performance. Using integrated vectorization, you lay a solid foundation for a highly effective RAG system that not only understands your data but leverages it to deliver precise, context-rich search results Retrieval Strategies Here are some common, foundational search strategies used in retrieval systems. Keyword Search Traditional keyword search is the foundation of many search systems. This method works by creating an inverted index—a mapping of each term in a document to the documents where it appears. 
For instance, imagine you have a collection of documents about fruits. A simple keyword search might count the occurrences of words like “apple,” “orange,” or “banana” to determine the relevance of each document. This approach is particularly effective when you need literal matches, such as pinpointing a flight number or a specific code where precision is crucial. Even as newer search technologies emerge, keyword search remains a robust baseline. It efficiently matches exact terms found in text, ensuring that when specific information is needed, the results are both fast and accurate. Vector Search While keyword search provides exact matches, it may not capture the full context or nuanced meanings behind a query. This is where vector search shines. In vector search, both queries and document chunks are transformed into high-dimensional embeddings using advanced models like OpenAI’s text-embedding-3-large. These embeddings capture the semantic essence of words and phrases in multi-dimensional vectors. Once everything is converted into vectors, the system performs a k-nearest neighbor search using cosine similarity. This method allows the search engine to find documents that are contextually similar—even if they don’t share exact keywords. For example, demo code in our system showed that a query like “what is Contoso?” not only returned literal matches but also contextually related documents, demonstrating a deep semantic understanding of the subject. In summary, combining keyword search with vector search in your RAG system leverages the precision of text-based matching with the nuanced insight of semantic search. This dual approach ensures that users receive both exact answers and optionally related information that enhances the overall retrieval experience. Hybrid Search Hybrid search is a powerful method that blends the precision of keyword search with the nuanced, context-aware capabilities of vector search. Hybrid search leverages the strengths of both strategies. On one hand, keyword search excels at delivering exact matches, which is critical when you're looking for precise information like flight numbers, product codes, or specific numerical data. On the other hand, vector search digs deeper by transforming your queries and documents into embeddings, allowing the system to understand and interpret the underlying semantics of the content. By combining these two, hybrid search ensures that both literal and contextually similar results are retrieved. Reciprocal Rank Fusion (RRF) is a technique used to merge the results from both keyword and vector searches into one cohesive set. Essentially, it reorders and integrates the result lists from each method, amplifying the highest quality matches from both sides. The outcome is a ranked list where the most relevant document chunks are prioritized. By incorporating hybrid search into your retrieval system, you get the best of both worlds: the precision of keyword matching alongside the semantic depth of vector search, all working together to deliver an optimal search experience. Reranking Reranking is a post-retrieval step. Reranking uses a reasoning model to sort and prioritize the most relevant retrieved documents first. Semantic ranker in Azure AI Search uses a cross-encoder model to re-score every document retrieved on a normalized scale from 0 to 4. This score reflects how well the document semantically matches the query. 
You can use this score to establish a minimum threshold to filter out low-quality or “noisy” documents, ensuring that only the best passages are sent along for further processing. This re-ranking model is trained on data commonly seen in RAG applications, across multiple industries, languages and data types. Query transformations Sometimes, a user’s original query might be imprecise or too narrow, which can lead to relevant content being missed. Pre-retrieval, you can transform, augment or modify the search query to improve recall. Query rewriting in Azure AI Search is a pre-retrieval feature that transforms the initial search query into alternative expressions. For example, a question like "What underwater activities can I do in the Bahamas?" might be rephrased as "water sports available in the Bahamas" or "snorkeling and diving in the Bahamas." This expansion creates additional candidate queries that help surface documents that may have been overlooked by the original wording. By optimizing across the entire query pipeline, not just the retrieval phase, you have more tools to deliver more relevant information to the language model. Azure AI Search makes it possible to fine-tune the retrieval process, filtering out noise and capturing a wider range of relevant content—even when the initial query isn’t perfect. Continue your RAG Journey: Wrapping Up & Looking Ahead Let’s take a moment to recap the journey you’ve embarked on today. We started with the fundamentals of data ingestion, where you learned how to use integrated vectorization to extract valuable information. Next, we moved into search strategies by comparing keyword search—which offers structured, literal matching ideal for precise codes or flight details—with the more dynamic vector search that captures the subtle nuances of language through semantic matching. Combining these methods with hybrid search, and using Reciprocal Rank Fusion to merge results, provided a balanced approach: the best of both worlds in one robust retrieval system. To further refine your results, we looked at the semantic ranker—a tool that re-scores and reorders documents based on their semantic fit with your query—and query rewriting, which transforms your original search ideas into alternative formulations to catch every potential match. These enhancements ensure that your overall pipeline isn’t just comprehensive; it’s designed to deliver only the most relevant, high-quality content. Now that you’ve seen how each component of this pipeline works together to create a state-of-the-art RAG system, it’s time to take the next step in your journey. Explore our repository for full code samples and detailed documentation. And don’t miss out on future RAG Time sessions, where we continue to share the latest best practices and innovations in retrieval augmented generation. Getting started with RAG on Azure AI Search has never been simpler, and your journey toward building even more effective retrieval systems is just beginning. Embrace the next chapter and continue to innovate! Next Steps Ready to explore further? Check out these resources, which can all be found in our centralized GitHub repo: Watch Journey 2 RAG Time GitHub Repo (Hands-on notebooks, documentation, and detailed guides to kick-start your RAG journey) Azure AI Search Documentation Azure AI Foundry Have questions, thoughts, or want to share how you’re using RAG in your projects? Drop us a comment below or open a discussion in our GitHub repo. Your feedback shapes our future content! 
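As a footnote to the hybrid search discussion earlier in this post, Reciprocal Rank Fusion can be pictured with a short sketch of the general formula (each document's fused score is the sum of 1/(k + rank) over the result lists it appears in); this is an illustration, not Azure AI Search's internal implementation:

def reciprocal_rank_fusion(result_lists, k=60):
    """Merge several ranked lists of document IDs into a single fused ranking."""
    scores = {}
    for results in result_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    # Highest combined score first.
    return sorted(scores, key=scores.get, reverse=True)

keyword_hits = ["doc3", "doc1", "doc7"]   # hypothetical keyword search results
vector_hits = ["doc1", "doc5", "doc3"]    # hypothetical vector search results
print(reciprocal_rank_fusion([keyword_hits, vector_hits]))
# Documents ranked well by both lists (doc1, doc3) rise to the top.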
View the full article
  14. In today’s digital landscape, SaaS and OAuth applications have revolutionized the way we work, collaborate, and innovate. However, they also introduce significant risks related to security, privacy and compliance. As the SaaS landscape grows, IT leaders must balance enabling productivity with managing risk. A key to managing risk is automated tools that provide real-time context and remediation capabilities to help Security Operations Center (SOC) teams outpace sophisticated attackers and limit lateral movement and damage. The Rise of OAuth App Attacks Over the past two years, there has been a significant increase in OAuth app attacks. Employees often create app-to-app connections without considering security risks. With just one click granting permissions, new apps can read and write emails, set rules, and gain authorization to perform nearly any action. These overprivileged apps are more at risk for compromise, and Microsoft internal research shows that 1 in 3 OAuth apps are overprivileged. 1 A common attack involves using phishing to compromise a user account, then creating a malicious OAuth app with elevated privileges or hijacking an existing OAuth app and manipulating it for malicious use. Once threat actors gain persistence in the environment, they can also deploy virtual machines or run spam campaigns resulting in data breaches, financial and reputational losses. Automatic Attack Disruption Microsoft’s Automatic attack disruption capabilities disrupt sophisticated in-progress attacks and prevent them from spreading, now including OAuth app-based attacks. Attack disruption is an automated response capability that stops in-progress attacks by analyzing the attacker’s intent, identifying compromised assets, and containing them in real time. This built-in, self-defense capability uses the correlated signals in XDR, the latest threat intelligence, and AI and machine learning backed models to accurately predict the attack path used and block an attacker’s next move before it happens with above 99% confidence. This includes response actions such as containing devices, disabling user accounts, or disabling malicious OAuth apps. The benefits of attack disruption include: Speed of response: attack disruption can disrupt attacks like ransomware in an average time of 3 minutes Reduced Impact of Attacks: by minimizing the time attackers have to cause damage, attack disruption limits the lateral movement of threat actors within your network, reducing the overall impact of the threat. This means less downtime, fewer compromised systems, and lower recovery costs. Enhanced Security Operations: attack disruption allows security operations teams to focus on investigating and remediating other potential threats, improving their efficiency and overall effectiveness. Real-World Attacks Microsoft Threat Intelligence has noted a significant increase in OAuth app attacks over the past two years. In most cases a compromised user provides the attacker initial access, while the malicious activities and persistence are carried out using OAuth applications. Here’s a real-world example of an OAuth phishing campaign that we’ve seen across many customers’ environments. Previous methods to resolve this type of attack would have taken hours for SOC teams to manually hunt and resolve. Initial Access: A user received an email that looks legitimate but contains a phishing link that redirects to an adversary-in-the-middle (AiTM) phishing kit. Figure 1. 
An example of an AiTM controlled proxy that impersonates a login page to steal credentials. Credential Access: When the user clicks on that link, they are redirected to an AiTM controlled proxy that impersonates a login page to steal the user credentials and an access token which grants the attacker the ability to create or modify OAuth apps. Persistence and Defense Evasion: The attacker created multiple malicious OAuth apps across various tenants, which grant read and write access to the user’s e-mail, files and other resources. Next the attacker created an inbox forwarding rule to exfiltrate emails. An additional rule was created to empty the sent box, thus deleting any evidence that the user was compromised. Most organizations are completely blindsided when this happens. Automatic Attack Disruption: Defender XDR gains insights from many different sources including endpoints, identities, email, collaboration tools, and SaaS apps and correlates the signals into a single, high-confidence incident. In this attack, XDR identifies assets controlled by the attacker and automatically takes response actions across relevant Microsoft Defender products to disable affected assets and stop the attack in real time. SOC Remediation: After the risk is mitigated, Microsoft Defender admins can manually unlock the users that had been automatically locked by the attack disruption response. The ability to manually unlock users is available from the Microsoft Defender action center, and only for users that were locked by attack disruption. Figure 2. Timeline to disrupt an OAuth attack comparing manual intervention vs. automatic attack disruption. Enhanced Security with Microsoft Defender for Cloud Apps Microsoft Defender for Cloud Apps enables the necessary integration and monitoring capabilities required to detect and disrupt malicious OAuth applications. To ensure SOC teams have full control, they can configure automatic attack disruption and easily revert any action from the security portal. Figure 3. An example of a contained malicious OAuth application, with attack disruption tag Conclusion Microsoft Defender XDR's automatic disruption capability leverages AI and machine learning for real-time threat mitigation and enhanced security operations. Want to learn more about how Defender for Cloud Apps can help you manage OAuth attacks and SaaS-based threats? Dive into our resources for a deeper conversation. Get started now. Get started Make sure your organization fulfils the Microsoft Defender prerequisites (Mandatory). Connect “Microsoft 365 connector” in Microsoft Defender for Cloud Apps (Mandatory). Check out our documentation to learn more about Microsoft 365 Defender attack disruption prerequisites, available controls, and indications. Learn more about other scenarios supported by automatic attack disruption Not a customer yet? Start a free trial today. 1 Microsoft Internal Research, May 2024, N=502 View the full article
  15. Dear Microsoft 365 Developer Team, I would like to submit a feature request regarding custom menus in Word JavaScript Add-ins. Currently, when defining custom menus for the ribbon via the manifest.xml, it is possible to create a root-level menu control with a list of menu items. However, submenus (nested menus) are not supported. This limits the ability to create well-structured and user-friendly menus, especially when dealing with more complex add-ins that require logical grouping of actions. Use Case Example: 
Imagine an add-in that handles document templates, formatting options, and insertion of custom content. It would be much more intuitive to organize these into hierarchical menus like:

My Add-in Menu
|---Templates
|    |---Contract Template
|    |---NDA Template
|---Formatting
|    |---Apply Header
|    |---Apply Footer
|---Insert
     |---Clause
     |---Placeholder

Currently, to achieve something like this, we either have to create long flat menus, which are less user-friendly and harder to navigate, or define multiple root-level menu controls as a workaround. However, having too many root-level menus clutters the ribbon and makes the overall user experience confusing and less efficient. Feature Request:
Please consider adding support for nested menu structures (submenus) in Office Add-in command definitions. This would: greatly improve the user experience for complex add-ins, allow better organization of actions and commands, and align the add-in UX more closely with the native ribbon and menu experiences in Office apps. Possible Implementation Suggestions: extend the Menu control to allow nested Menu or MenuItem elements; allow referencing predefined menus to enable reuse and modularity. Related Documentation: Office Add-ins XML manifest; Add-In Commands Overview; Control Element of Type Menu. Thank you for considering this enhancement. It would be a huge step forward for creating more powerful and user-friendly Office Add-ins! Best regards,
Ingo View the full article
  16. I'm trying to use PowerAutomate's Send a Microsoft Graph HTTP request to get webinar registrant details. The end goal is to send registrant details into make.com as part of a wider automation, but I simply cannot get the GET request to work. I want to know if it's actually possible to do - thanks for your patience, I feel a little out of my depth... View the full article
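For anyone searching for the same thing, one pointer to verify rather than a confirmed answer: Teams webinar registrations are surfaced through the Microsoft Graph virtual events API, along the lines of GET https://graph.microsoft.com/v1.0/solutions/virtualEvents/webinars/{webinarId}/registrations with a VirtualEvent.Read-type permission. It is worth checking both that the request targets that path and that the Power Automate Graph action being used is allowed to call the /solutions segment with the required permission scope.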
  17. Created Microsoft Graph Connector for SQL on premises database via MS Graph Connector agent. Configured it on m365 admin portal. During Graph Connector configuration, preview shows the SQL table and data. Data indexing also completes successfully. Created Copilot agent in m365 and added the aforementioned Graph Connector as knowledge source. However, the Copilot agent is not able to answer questions about SQL data. It does not seem to have access to the search. View the full article
  18. Hello, all. I am about to submit a first version of an app for MS team evaluation before it goes to the MS Store, and one of the steps is to fill out a tax profile. While filling it in on my Partner Center page, which is a dev account registered outside the US, it comes to a point where it wants a W-8BEN filled in, even after I have indicated on at least 2 occasions in the same process that I am not based in the US, am not a US citizen, and don't pay tax in the US. Why does it still want a W-8BEN given the above explanation, and considering that any future revenue is not made in the US? Any explanation will be appreciated. View the full article
  19. We are using Microsoft Calling Plan and have assigned 10-digit numbers to staff. One admin assistant is handling incoming (external) calls for their supervisor who does not want the PSTN calls ringing him directly in Teams. But with forwarding on, we're noticing ALL calls - including web and video, and including internal calls - are being forwarded. While this is useful for some scenarios, traditional delegation would call for distinguishing incoming external PSTN calls. What "clean" options do we have to forward the external PSTN calls to the assistant? I feel like this should already be an explicit setting in the Teams Call settings... Ideally the supervisor would retain the line for outbound and dial by name. I'm guessing I could create (yet another) call queue and assign both people, then opt out the supervisor (who does not want the incoming PSTN calls ringing him directly in Teams). But that means we lose outbound calls unless I assign a separate line to the supervisor and turn on an outbound caller ID policy that maps to the (now shared) line. Thanks! View the full article
  20. I have a large number of tables to backfill and I was hoping to automate this process with a package parameter instead of creating a dataflow for each table (see screenshot below).
1. Package parameter like 'tbl_a, tbl_b, tbl_c'
2. Followed by an Execute Script task that splits this string into an array.
3. The array is then enumerated in a Foreach Loop container.
   3a. The SQL command from the enumerated variable is fed into the OLE DB Source SQL command: 'Select * from tbl_a', 'Select * from tbl_b', etc.
   3b. The table name or view name variable in the OLE DB Destination is given the different table names ('tbl_a', 'tbl_b', 'tbl_c') through the enumerated variables.
Where it fails is when tbl_a, tbl_b is enumerated into the OLE DB Destination.
Error: 0xC020201B at Data Flow Task, OLE DB Destination [2]: The number of input columns for OLE DB Destination.Inputs[OLE DB Destination Input] cannot be zero.
Error: 0xC004706B at Data Flow Task, SSIS.Pipeline: "OLE DB Destination" failed validation and returned validation status "VS_ISBROKEN".
Is there a way to parameterize the OLE DB Destination? Is there a way to avoid needing column mappings, or to somehow query all the columns from the OLE DB Source to feed into the OLE DB Destination? Each table is different, and mapping the columns for one backfill would completely blow up the next backfill in the batch. Is there another task I can use in place of the OLE DB Destination that can be preceded by an OLE DB Source? Thanks in advance View the full article
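Because the data flow's column mappings are fixed at design time, one pragmatic workaround is to run the per-table copy from a script rather than from a reused OLE DB Destination. The sketch below is a rough Python/pyodbc equivalent of the loop described above, not an SSIS fix; the connection strings and table list are placeholders, and it assumes each destination table already exists with the same column order as its source.

```python
import pyodbc

# Placeholder connection strings; TABLES mirrors the package parameter
# 'tbl_a, tbl_b, tbl_c' described above.
SOURCE_CONN = "Driver={ODBC Driver 18 for SQL Server};Server=SRC;Database=Stage;Trusted_Connection=yes;"
DEST_CONN = "Driver={ODBC Driver 18 for SQL Server};Server=DST;Database=DW;Trusted_Connection=yes;"
TABLES = "tbl_a, tbl_b, tbl_c"

src = pyodbc.connect(SOURCE_CONN)
dst = pyodbc.connect(DEST_CONN)
dst_cursor = dst.cursor()
dst_cursor.fast_executemany = True  # bulk-friendly parameterized inserts

for table in (t.strip() for t in TABLES.split(",")):
    # For very large tables, fetch in batches with fetchmany() instead.
    rows = src.cursor().execute(f"SELECT * FROM {table}").fetchall()
    if not rows:
        continue
    data = [tuple(r) for r in rows]
    placeholders = ", ".join("?" for _ in data[0])
    # Assumes the destination table has the same column order as the source.
    dst_cursor.executemany(f"INSERT INTO {table} VALUES ({placeholders})", data)
    dst.commit()
    print(f"Backfilled {len(data)} rows into {table}")
```

Inside SSIS itself, the closest equivalent is usually a Script Task that performs a bulk copy per enumerated table name, precisely because the OLE DB Destination cannot late-bind its mappings.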
  21. This blog series is designed to help you skill up on Microsoft 365 Copilot. We hope you will make this your go-to source for the latest updates, resources, and opportunities in technical skill building for Microsoft 365 Copilot.

New Microsoft 365 Copilot training for business users
On-demand training introducing business users to Copilot. Great for users new to Copilot!
Work smarter with AI - Training | Microsoft Learn
Get more done and unleash your creativity with Microsoft Copilot. In this learning path, you'll explore how to use Microsoft Copilot or Microsoft 365 Copilot to help you research, find information, and generate effective content.

On-demand training for business users looking to improve productivity in the apps they use every day
Draft, analyze, and present with Microsoft 365 Copilot - Training | Microsoft Learn
This learning path directs users to learn common prompt flows in Microsoft 365 apps including PowerPoint, Word, Excel, Teams, and Outlook. It also introduces Microsoft 365 Copilot Chat and discusses the difference between work- and web-grounded data.

On-demand training for business users who want to get started with AI-powered agents
Transform your everyday business processes with no-code agents - Training | Microsoft Learn
This learning path examines no-code agents in Microsoft 365 Copilot Chat and SharePoint and explores how business users can create, manage, and use agents as their own AI-powered assistant.

New resources to accelerate your journey with Microsoft 365 Copilot Chat and agents to transform business processes

Copilot Chat and agent starter kit
To support the announcement of Microsoft 365 Copilot Chat, we have updated the Copilot Success Kit and the Copilot Success Kit for small and medium-sized businesses, which now include a new agent starter kit with guidance and easy ways for your organization to get started with Copilot Chat and agentic functionality. You can find the latest assets and resources here to start your journey. The Copilot Chat and Agent Starter Kit has a comprehensive set of guidance for both IT and end users.

Agent overview guide
Learn how to quickly unlock the value of Copilot through agents. See the easiest ways to get started across Copilot Chat, SharePoint, and Microsoft Copilot Studio, with lots of examples and templates that will help you quickly build and use your first agents.

IT setup and controls guide
Get the latest IT setup and controls guidance for Copilot Chat and agents. Manage access to Copilot Chat for your users and set up the required data governance controls. Then set up access to agents, including licensing and billing plans. Learn how to monitor and manage consumption.

Latest agent blogs
Catch up on the latest announcement from Satya Nadella and Jared Spataro announcing Copilot Chat here.
Understand how pricing for Microsoft 365 Copilot Chat will work and what new capabilities we are announcing in Copilot Studio to support it here.

Copilot Chat and agents user resources
Share the Copilot Chat user training deck with users at your organization to introduce Copilot Chat and guide them on how to use it effectively.
For dedicated guidance on using and creating agents, share the Agents in Copilot Chat handout.

Copilot Chat scenarios
We have launched new Copilot Chat (free) and agent (consumption) scenarios in the Scenario Library, with easy steps for each of your functional teams to get started.
Microsoft 365 Copilot AMA event for IT administrators (recap)

On-demand AMA on tools and techniques for preparing your data for Copilot
Prepare your data for Copilot: Essential tools and techniques
Learn how to address oversharing, integrate SharePoint Advanced Management, and utilize Microsoft Purview for secure and compliant data handling. Get practical guidance to ensure your data is ready for Copilot deployment, including insights from our Microsoft 365 Copilot deployment blueprint.

On-demand AMA on how data flows through Microsoft 365 Copilot
Follow the prompt: How data flows through Microsoft 365 Copilot
Explore how Microsoft processes and protects your data with Microsoft 365 Copilot. Focus on enterprise data protection, responsible AI services, and orchestration in managing prompts. Learn about tools to prevent data loss and oversharing, and how Microsoft Graph connectors and agents integrate external data sources to enhance Copilot skills and knowledge.

Join us at the Microsoft 365 Community Conference in Las Vegas, May 6-8
The Microsoft 365 Community Conference is your chance to keep up with AI, build game-changing skills, and take your career (and business) even further. With over 200 sessions, workshops, keynotes, and AMAs, you'll learn directly from the experts and product-makers who are reimagining what's possible in the workplace. Here's what you can expect:
- Meet one-on-one with the people who create Microsoft products: ask questions, share feedback, and discover real-world solutions
- Explore Microsoft's latest product updates and learn about what's on the horizon
- Build and sharpen skills you can use immediately to be more productive, creative, and collaborative with the Microsoft tools you use every day
- Grow your network, dive deep, and have fun with the best community in tech

How to register: Buy tickets today and get ready to transform the way you work. Save $150 with our exclusive customer code SAVE150. View the full article
  22. What does this message mean in Microsoft Bookings? I've been trying to see my booking registry but I keep seeing this message. Sometimes the message varies when I try to access the app, but often it appears like the following:

Detailed message displayed:
UTC Date: 2025-03-11T14:14:30.266Z
Client Id: F6F199305D9541EDA2A53EE57C90EDB8
Session Id: 8009806d-b6f3-40b9-9ddd-0ef7ba2b900f
Client Version: 20250310070
BootResult: network
Back Filled Errors: Unhandled Rejection: Error: [object Object]:undefined|Unhandled Rejection: Error: [object Object]:undefined|undefined:undefined|undefined:undefined
err: Error: [object Object]
esrc: StartupData
et: ServerError
estack: Error: [object Object] at https://res.df.onecdn.static.microsoft/owamail/hashed-v1/scripts/owa.47106.6187e72c.js:1:71227 at async S (https://res.df.onecdn.static.microsoft/owamail/hashed-v1/scripts/owa.10245.3a6876a0.js:1:3108) at async https://res.df.onecdn.static.microsoft/owamail/hashed-v1/scripts/owa.10245.3a6876a0.js:1:2460 at async l (https://res.df.onecdn.static.microsoft/owamail/hashed-v1/scripts/owa.bookingsindexv2.b6602698.js:1:97492) at async w (https://res.df.onecdn.static.microsoft/owamail/hashed-v1/scripts/owa.bookingsindexv2.b6602698.js:1:100345) View the full article
  23. What is Network Security Perimeter?
The Network Security Perimeter is a feature designed to enhance the security of Azure PaaS resources by creating a logical network isolation boundary. This allows Azure PaaS resources to communicate within an explicit trusted boundary, ensuring that external access is limited based on network controls defined across all Private Link resources within the perimeter.

Azure Monitor - Network Security Perimeter - Public Cloud Region Update
We are pleased to announce the expansion of Network Security Perimeter features in Azure Monitor services from 6 to 56 Azure regions. This significant milestone enables us to reach a broader audience and serve a larger customer base. It underscores our continuous growth and dedication to meeting the security needs of our global customers. The Network Security Perimeter feature, now available in these additional regions, is designed to enhance the security and monitoring capabilities of our customers' networks. By utilizing our solution, customers can achieve a more secure and isolated network environment, which is crucial in today's dynamic threat landscape. Currently, NSP is in public preview for Azure Global customers, and we have expanded Azure Monitor region support for NSP from 6 regions to 56 regions. The region rollout has enabled our customers to meet their network isolation and monitoring requirements for implementing the Secure Future Initiative (SFI) security waves.

Azure Monitor - Network Security Perimeter Configuration

Key Benefits to Azure Customers
The Network Security Perimeter (NSP) provides several key benefits for securing and managing Azure PaaS resources:
- Enhances security by allowing communication within a trusted boundary and limiting external access based on network controls.
- Provides centralized management, enabling administrators to define network boundaries and configure access controls through a uniform API in Azure Core Network.
- Offers granular access control with inbound and outbound rules based on IP addresses, subscriptions, or domain names.
- Includes logging and monitoring capabilities for visibility into traffic patterns, aiding in auditing, compliance, and threat identification.
- Integrates seamlessly with other Azure services and supports complex network setups by associating multiple Private Link resources with a single perimeter.
These characteristics highlight NSP as an excellent instrument for enhancing network security and ensuring data integrity based on the network isolation configuration.

Have a Question / Any Feedback?
Reach us at AzMon-NSP-Scrum@microsoft.com View the full article
  24. Hi, Insiders! Considering writing your first novel or a children's book? Microsoft Copilot can help you get started or get unstuck during the story development phase of the project, or help you pick the perfect title for your masterpiece.

If you need help with the characters
Let's say you have an idea for a story and main character, but are struggling to decide on its name, background, or personality. Share what you're thinking of with Copilot and ask for some suggestions!
Sample prompt: I'm writing a children's book about a pencil living among pens and learning how to fit in while also embracing its uniqueness. Can you come up with 2-3 relatable name ideas for the main character pencil? Also, generate 2-3 punny ideas for the name of the pen town.
Copilot's response

If you need help with the plot
Maybe your character is clear, but you're not sure how to drive the plot forward. Or, maybe you're staring at a blank page and need some help simply "putting pen to paper." Copilot can take even the roughest idea and give you some helpful suggestions for turning it into something that will spur you on.
Sample prompt: I want to create a story for adults about marriage in your late 60s. I want it to feel realistic and give useful advice. The story is about fictional characters Gillian and Robert, who met on a dating app after their children told them to get back out there. Gillian's husband passed away a few months prior, and Robert is divorced from his high school sweetheart. Can you suggest 1-3 plot points the book could cover that relate to their situation and what someone in their 60s might encounter on a dating app or in the dating scene?
Copilot's response

If you need help with a copy issue
So your characters and plot are clear - fantastic! Copilot can still be of assistance when you're struggling to put into words a quote, scene, phrase, or paragraph. Give it your rough draft and see how it tweaks and refines it.
Sample prompt: I'm writing a scene for a short personal essay about when I visited the Grand Canyon for the first time. I wasn't just struck by its beauty, but it made me almost terrified of how insignificant we can be in the grand scheme of life. I mentioned this to my father, whom I was traveling with, and he reminded me of how we all make small impacts on the world every second of every day. Can you write a short dialog to showcase this conversation?
Copilot's response

Tips and tricks
As you draft your own prompts throughout your book ideation and writing process, keep these tips in mind to make Copilot's responses as effective as possible:
- Be specific: Instead of asking, "Give me some nonfiction book ideas," you could ask, "What are 3-5 book ideas for a story for teenagers about entering high school?"
- Provide context: Copilot can tailor its responses to the type of writing or style you want to emulate: "Give me 2-3 plot points for a novel about skiing that's both serious about the sport and lighthearted in tone."
- Ask clear questions: Instead of a broad question like, "What should I write about?" try, "What are some long-form essays I could write as a 23-year-old single man living in Europe?"
- Break down complex requests: If you have a multi-part question, break it into smaller parts: "First, can you provide a title and outline for a cookbook about cooking with children? Then, suggest 3-5 recipes I should include."
- Specify desired format: If you need a list, summary, or detailed explanation, mention that: "Can you provide a list of 5 books or articles I should read if I want to write my own book of poems?"
- Indicate your preferences: Let Copilot know if you have a preference for the type of information or tone: "Can you write a dialog between a worm and an apple that's funny and uses Gen Z lingo?"
- Provide examples: If you're looking for creative ideas, give an example of what you like: "I need a story idea inspired by 'Harold and the Purple Crayon.'"
- Ask follow-up questions: If Copilot's initial response isn't quite what you need, ask a follow-up question to narrow down the information: "Can you give more details on the side character Bill who lives in a teapot?"
- Be patient and iterative: Sometimes it takes a few tries to get the perfect response. Feel free to refine your prompt based on the initial answers you receive.

We can't wait to read what you come up with!

Learn about the Microsoft 365 Insider program and sign up for the Microsoft 365 Insider newsletter to get the latest information about Insider features in your inbox once a month! View the full article
  25. Azure Kubernetes Service (AKS) now offers free platform metrics for monitoring your control plane components. This enhancement provides essential insights into the availability and performance of managed control plane components, such as the API server and etcd. In this blog post, we'll explore these new metrics and demonstrate how to leverage them to ensure the health and performance of your AKS clusters.

What's New?
Previously, detailed control plane metrics were only available through the paid Azure Managed Prometheus feature. Now, these metrics are automatically collected for free for all AKS clusters and are available for creating metric alerts. This democratizes access to critical monitoring data and helps all AKS users maintain more reliable Kubernetes environments.

Available Control Plane Metrics
The following platform metrics are now available for your AKS clusters:
- apiserver_memory_usage_percentage (API Server (PREVIEW) Memory Usage Percentage): Maximum memory percentage (based on the current limit) used by the API server pod across instances
- apiserver_cpu_usage_percentage (API Server (PREVIEW) CPU Usage Percentage): Maximum CPU percentage (based on the current limit) used by the API server pod across instances
- etcd_memory_usage_percentage (ETCD (PREVIEW) Memory Usage Percentage): Maximum memory percentage (based on the current limit) used by the etcd pod across instances
- etcd_cpu_usage_percentage (ETCD (PREVIEW) CPU Usage Percentage): Maximum CPU percentage (based on the current limit) used by the etcd pod across instances
- etcd_database_usage_percentage (ETCD (PREVIEW) Database Usage Percentage): Maximum utilization of the etcd database across instances

Accessing the New Platform Metrics
The metrics are automatically collected and available in the Azure Monitor Metrics explorer. Here's how to access them:
1. Navigate to your AKS cluster in the Azure portal.
2. Select "Metrics" from the Monitoring section.
3. In the Metric Namespace dropdown, choose Container service, then select any of the metrics mentioned above, e.g. API Server Memory Utilization. You can also choose your desired aggregation (Avg or Max) and timeframe.
You'll now see the control plane metrics available for selection. These metrics can also be retrieved through the platform metrics API or exported to other destinations.

Understanding Key Control Plane Metrics

API Server Memory Usage Percentage
The API server is the front end for the Kubernetes control plane, processing all requests to the cluster. Monitoring its memory usage is critical because:
- High memory usage can lead to API server instability and potential outages
- Memory pressure may cause request latency or timeouts
- Sustained high memory usage indicates potential scaling issues
A healthy API server typically maintains memory usage below 80%. Values consistently above this threshold warrant investigation and potential remediation. To investigate further, follow the guide here.

etcd Database Usage Percentage
etcd serves as the persistent storage for all Kubernetes cluster data. The etcd_database_usage_percentage metric is particularly important because:
- etcd performance degrades dramatically as database usage approaches capacity
- High database utilization can lead to increased latency for all cluster operations
- Database size impacts backup and restore operations
Best practices suggest keeping etcd database usage below 2 GB (absolute usage) to ensure optimal performance.
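To check where a cluster currently sits against these guidelines without opening the portal, the sketch below uses the azure-monitor-query Python package to pull the same platform metrics. It is a minimal example under a few assumptions: the resource ID is a placeholder for your AKS cluster, and the metric names are the ones listed above, which may still change while the metrics are in preview.

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricAggregationType, MetricsQueryClient

# Placeholder: full Azure resource ID of the AKS cluster to inspect.
RESOURCE_ID = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/"
    "Microsoft.ContainerService/managedClusters/<cluster-name>"
)

client = MetricsQueryClient(DefaultAzureCredential())

result = client.query_resource(
    RESOURCE_ID,
    metric_names=[
        "apiserver_memory_usage_percentage",
        "etcd_database_usage_percentage",
    ],
    timespan=timedelta(hours=1),
    granularity=timedelta(minutes=5),
    aggregations=[MetricAggregationType.MAXIMUM],
)

# Report the maximum observed value per metric over the last hour.
for metric in result.metrics:
    values = [
        point.maximum
        for series in metric.timeseries
        for point in series.data
        if point.maximum is not None
    ]
    if values:
        print(f"{metric.name}: max {max(values):.1f} in the last hour")
```

The same values are what drive the alert rules described in the next section.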
When etcd database usage approaches or exceeds the 2 GB guidance, you can clean up unnecessary resources, reduce watch operations, and implement resource quotas and limits. The Diagnose and Solve experience in the Azure portal has detailed insights into the cause of etcd database saturation. To investigate this issue further, follow the guide here.

Setting Up Alerts for Control Plane Metrics
To proactively monitor your control plane, you can set up metric alerts:
1. Navigate to your AKS cluster in the Azure portal.
2. Select "Alerts" from the Monitoring section.
3. Click "Create" and select "Alert rule".
4. Select your subscription, resource group, and the resource type "Kubernetes service" in the scope (selected by default), then click "See all signals" under Conditions.
5. Configure the signal logic:
   - Select one of the control plane metrics (e.g., "API Server Memory Usage Percentage")
   - Set the condition (e.g., "Greater than")
   - Define the threshold (e.g., 80%)
   - Specify the evaluation frequency and window
6. Define the actions to take when the alert triggers.
7. Name and save your alert rule.

Example Alert Configurations

API Server Memory Alert:
- Signal: apiserver_memory_usage_percentage
- Operator: Greater than
- Threshold: 80%
- Window: 5 minutes
- Frequency: 1 minute
- Severity: 2 (Warning)

ETCD Database Usage Alert:
- Signal: etcd_database_usage_percentage
- Operator: Greater than
- Threshold: 75%
- Window: 15 minutes
- Frequency: 5 minutes
- Severity: 2 (Warning)

You can also create alerts through the CLI, PowerShell, or ARM templates.

Conclusion
The introduction of free Azure platform metrics for AKS control plane components is a meaningful enhancement to the monitoring capabilities available to all AKS users. By leveraging these metrics, particularly the API server memory usage and etcd database usage percentages, you can ensure the reliability and performance of your Kubernetes environments without additional cost. Start using these metrics today to gain deeper insights into your AKS clusters and set up proactive alerting to prevent potential issues before they impact your applications.

Learn More
For more detailed information, refer to the following documentation:
- Monitor the control plane
- List of platform metrics in AKS
- Troubleshoot API server and etcd problems in AKS
View the full article