Everything posted by Windows Server
-
My existing Windows 11 Pro installation on my Dell laptop seems to have had automatic BitLocker encryption enabled during/after installation. Now I want to do a clean reinstall that bypasses this automatic BitLocker encryption. I know the Rufus app has an option to create a Windows 11 USB installer that will not enable BitLocker automatically. The question is: do I need to fiddle with the TPM in any way beforehand, or is wiping the system drive and then installing from a Windows 11 USB installer created with Rufus (with those bypass options enabled) enough? View the full article
-
Can anybody give a comparison with a Ryzen CPU too? I want to know whether this CPU handling happens on Intel only or on all modern CPUs. View the full article
-
Is it possible to change this via the command line? View the full article
-
I am just trying to install a clean Windows 11 on an HP Laptop 17-by4061nr. I can't get it to see the hard drive that I just formatted. I researched the issue and it seems Intel removed the old way of loading the storage driver from a USB; it's a SetupRST.exe now. View the full article
-
This is a lesser-known new feature in Windows 11 24H2. JXL is arguably the single best image format available, in every aspect except platform support. The more software that can handle it, the better. View the full article
-
I recently exported nearly 500 HEIC photos from my iPhone and found that many platforms (such as Windows computers, social media, and cloud drives) cannot directly preview or upload such files. I urgently need a solution that supports batch operations, preferably one that preserves the original image quality without complicated steps. Is there a free, efficient tool or script that can quickly batch-convert HEIC to JPG? My current dilemma: I tried an online "heic to jpg" converter that only handled 5 photos at a time and stripped the copyright information. I also tried to use Python scripts, but the EXIF information came out garbled. Please share your suggestions; I don't want all the hassle! View the full article
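Since the post already mentions trying Python, here is a minimal sketch of a local batch conversion that keeps the EXIF block intact. It assumes the third-party pillow-heif and Pillow packages are installed (pip install pillow-heif pillow), and the folder names are placeholders:

```python
# Batch-convert HEIC to JPG locally while carrying the original EXIF data across.
# Assumes: pip install pillow-heif pillow   (folder names below are placeholders)
from pathlib import Path

from PIL import Image
import pillow_heif

pillow_heif.register_heif_opener()  # lets Pillow open .heic files directly

src_dir = Path("heic_photos")       # folder with the exported .heic files
dst_dir = Path("jpg_output")
dst_dir.mkdir(exist_ok=True)

for heic_path in sorted(src_dir.glob("*.heic")):
    image = Image.open(heic_path)
    exif = image.info.get("exif")   # original EXIF bytes, if present
    save_kwargs = {"quality": 95}
    if exif:
        save_kwargs["exif"] = exif  # keep copyright and camera metadata
    out_path = dst_dir / (heic_path.stem + ".jpg")
    image.convert("RGB").save(out_path, "JPEG", **save_kwargs)
    print(f"{heic_path.name} -> {out_path.name}")
```

The originals are left untouched and the JPGs are written to a separate folder, so the 500-photo batch can be re-run safely if anything goes wrong.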
-
Azure Load Testing (ALT) has been an essential tool for performance testing, enabling customers across industries to run thousands of tests every month. We are thrilled to celebrate its second anniversary with two major announcements. In this blog post, we will delve into the remarkable capabilities of ALT and reveal the exciting developments that will redefine load testing for you.

Why do customers love ALT?
ALT is a powerful service designed to ensure that your applications can handle high traffic and perform optimally under peak load. Here are some key features of ALT:
Large-scale tests: Simulate over 100,000 concurrent users.
Long-duration tests: Run tests for up to 24 hours.
Multi-region tests: Simultaneously simulate users from any of the 20 supported regions.
Continuous tests: Catch performance regressions early by integrating with Azure Pipelines, GitHub Actions, or other CI/CD systems.
Comprehensive test results: Correlate server-side metrics with client-side metrics for end-to-end insights.
Analytics and insights: Quickly and easily identify performance bottlenecks with detailed analytics.

Pricing Changes: Listening to Our Customers
We have heard your feedback and are excited to announce significant pricing changes, effective March 1, 2025:
No monthly resource fee: We have eliminated the monthly resource fee to help you save on overall costs.
20% price reduction: The cost per Virtual User Hour (VUH) for >10,000 VUH is reduced from 7.5 cents to 6 cents.
Additionally, we are introducing a feature to set a consumption limit per resource. This will enable central teams, such as the Performance Center of Excellence, to effectively manage and control the costs incurred by each team. These changes reflect our commitment to making ALT more accessible and cost-effective, ensuring that you can optimize your applications without worrying about budget constraints.

Locust-Based Tests: Offering Choice to Our Customers
In another exciting development, we are delighted to announce the availability of Locust-based tests. This addition allows you to leverage the power, flexibility, and developer-friendly nature of the Python-based Locust load testing framework, in addition to the already supported Apache JMeter load testing framework. We are also working on making it easy for you to generate tests by leveraging AI. With our integration with GitHub Copilot, you will be able to simply start with a Postman Collection or an HTTP file and leverage the copilot to generate Locust-based tests. Stay tuned! This update opens new possibilities for you, providing a choice of load testing frameworks and making it easy to generate tests.

In Summary
As we celebrate the second anniversary, we are committed to continually improving and evolving the service to meet your needs. With the introduction of half a dozen features (1. consumption limits, 2. Locust-based tests, 3. support for multiple test files, 4. scheduling, 5. notifications, 6. support for managed identity) apart from the pricing changes, we are confident that ALT will continue to be an indispensable tool in your performance testing arsenal. We are excited about all the updates over two years and look forward to seeing how they enhance your testing processes. Thank you for being a part of our journey, and we can't wait to see what you achieve with ALT. If you would like to share how you were able to leverage ALT for an interesting scenario, email me at shon dot shah at microsoft dot com or post your feedback at https://aka.ms/malt-feedback. Happy load testing!
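For readers curious what a Locust-based test plan looks like, here is a minimal, hypothetical locustfile; the host and endpoints are placeholders and would be replaced with your own application's URLs:

```python
# Minimal Locust test plan (locustfile.py); host and endpoints are placeholders.
from locust import HttpUser, task, between


class WebsiteUser(HttpUser):
    host = "https://example.contoso.com"    # placeholder base URL of the app under test
    wait_time = between(1, 5)               # simulated think time between requests (seconds)

    @task(3)
    def browse_home(self):
        self.client.get("/")                # weighted 3x: most virtual users hit the home page

    @task(1)
    def list_products(self):
        self.client.get("/api/products")    # placeholder API endpoint
```

You can validate a script like this locally with locust -f locustfile.py before scaling it up in a managed test.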
View the full article
-
Recently I found that the audio files recorded with the voice memos app on my phone are all in .m4a format. They play back normally, but I ran into a lot of trouble sharing them with my friends: some devices don't support direct playback, and some platforms restrict uploading them. I've heard that MP3 is the most widely supported audio format, so I'd like to ask you computer experts how to batch convert M4A files to MP3 format. I need to balance ease of operation and conversion efficiency, and I'm worried that the sound quality will be drastically reduced. Is there a recommended safe and reliable way to convert M4A to MP3 on Windows 11? Online tools are convenient, but I don't dare upload private files to them. Lastly, there are multiple files (about 50); is there a shortcut for batch processing? View the full article
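One fully offline option is to script ffmpeg, so nothing private ever leaves the PC. A minimal sketch, assuming ffmpeg is installed and on the PATH; the folder path is a placeholder:

```python
# Batch-convert every .m4a in a folder to .mp3 using ffmpeg (must be on PATH).
# The folder path is a placeholder; -q:a 2 selects high-quality VBR MP3 (~190 kbps).
import subprocess
from pathlib import Path

src_dir = Path(r"C:\Users\you\Music\voice_memos")   # placeholder input folder
dst_dir = src_dir / "mp3"
dst_dir.mkdir(exist_ok=True)

for m4a in sorted(src_dir.glob("*.m4a")):
    mp3 = dst_dir / (m4a.stem + ".mp3")
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(m4a),
         "-codec:a", "libmp3lame", "-q:a", "2", str(mp3)],
        check=True,
    )
    print(f"Converted {m4a.name} -> {mp3.name}")
```

Keep in mind that M4A (AAC) to MP3 is a lossy-to-lossy transcode, so a small quality loss is unavoidable; at -q:a 2 it is rarely audible for voice recordings.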
-
TOC
Introduction to Triton
System Architecture
- Architecture
- Focus of This Tutorial
Setup Azure Resources
- File and Directory Structure
- ARM Template
- ARM Template From Azure Portal
Testing Azure Container Apps
Conclusion
References

1. Introduction to Triton
Triton Inference Server is an open-source, high-performance inferencing platform developed by NVIDIA to simplify and optimize AI model deployment. Designed for both cloud and edge environments, Triton enables developers to serve models from multiple deep learning frameworks, including TensorFlow, PyTorch, ONNX Runtime, TensorRT, and OpenVINO, using a single standardized interface. Its goal is to streamline AI inferencing while maximizing hardware utilization and scalability.
A key feature of Triton is its support for multiple model execution modes, including dynamic batching, concurrent model execution, and multi-GPU inferencing. These capabilities allow organizations to efficiently serve AI models at scale, reducing latency and optimizing throughput. Triton also offers built-in support for HTTP/REST and gRPC endpoints, making it easy to integrate with various applications and workflows. Additionally, it provides model monitoring, logging, and GPU-accelerated inference optimization, enhancing performance across different hardware architectures.
Triton is widely used in AI-powered applications such as autonomous vehicles, healthcare imaging, natural language processing, and recommendation systems. It integrates seamlessly with NVIDIA AI tools, including TensorRT for high-performance inference and DeepStream for video analytics. By providing a flexible and scalable deployment solution, Triton enables businesses and researchers to bring AI models into production with ease, ensuring efficient and reliable inferencing in real-world applications.

2. System Architecture
Architecture
Development Environment
OS: Ubuntu
Version: Ubuntu 18.04 Bionic Beaver
Docker version: 26.1.3
Azure Resources
Storage Account: SKU - General Purpose V2
Container Apps Environments: SKU - Consumption
Container Apps: N/A

Focus of This Tutorial
This tutorial walks you through the following stages:
Setting up Azure resources
Publishing the project to Azure
Testing the application
Each of the mentioned aspects has numerous corresponding tools and solutions. The relevant information for this session is listed in the table below.
Local OS: Windows / Linux / Mac
How to setup Azure resources and deploy: Portal (i.e., REST api) / ARM / Bicep / Terraform

3. Setup Azure Resources
File and Directory Structure
Please open a terminal and enter the following commands:
git clone https://github.com/theringe/azure-appservice-ai.git
cd azure-appservice-ai
After completing the execution, you should see the following directory structure:
File and Path: triton/tools/arm-template.json
Purpose: The ARM template to setup all the Azure resources related to this tutorial, including a Container Apps Environments, a Container Apps, and a Storage Account with the sample dataset.

ARM Template
We need to create the following resources or services (Manual Creation Required / Resource or Service):
Container Apps Environments: Yes / Resource
Container Apps: Yes / Resource
Storage Account: Yes / Resource
Blob: Yes / Service
Deployment Script: Yes / Resource
Let's take a look at the triton/tools/arm-template.json file. Refer to the configuration section for all the resources. Since most of the configuration values don't require changes, I've placed them in the variables section of the ARM template rather than the parameters section.
This helps keep the configuration simpler. However, I'd still like to briefly explain some of the more critical settings. As you can see, I've adopted a camelCase naming convention, which combines the [Resource Type] with [Setting Name and Hierarchy]. This makes it easier to understand where each setting will be used. The configurations in the diagram are sorted by resource name, but the following list is categorized by functionality for better clarity.

Configuration Name / Value / Purpose:
storageAccountContainerName = data-and-model. [Purpose 1: Blob Container for Model Storage] Use this fixed name for the Blob Container.
scriptPropertiesRetentionInterval = P1D. [Purpose 2: Script for Uploading Models to Blob Storage] No adjustments are needed. This script is designed to launch a one-time instance immediately after the Blob Container is created. It downloads sample model files and uploads them to the Blob Container. The Deployment Script resource will automatically be deleted after one day.
caeNamePropertiesPublicNetworkAccess = Enabled. [Purpose 3: For Testing] ACA requires your local machine to perform tests; therefore, external access must be enabled.
appPropertiesConfigurationIngressExternal = true. [Purpose 3: For Testing] Same as above.
appPropertiesConfigurationIngressAllowInsecure = true. [Purpose 3: For Testing] Same as above.
appPropertiesConfigurationIngressTargetPort = 8000. [Purpose 3: For Testing] The Triton service container uses port 8000.
appPropertiesTemplateContainers0Image = nvcr.io/nvidia/tritonserver:22.04-py3. [Purpose 3: For Testing] The Triton service container utilizes this online resource.

ARM Template From Azure Portal
In addition to using az cli to invoke ARM Templates, if the JSON file is hosted on a public network URL, you can also load its configuration directly into the Azure Portal by following the method described in the article [Deploy to Azure button - Azure Resource Manager]. This is my example: Click Me
After filling in all the required information, click Create. We can run a test once the creation process is complete.

4. Testing Azure Container Apps
In our local environment, use the following command to start a one-time Docker container. We will use NVIDIA's official test image and send a sample image from within it to the Triton service that was just deployed to Container Apps.
# Replace XXX.YYY.ZZZ.azurecontainerapps.io with the actual FQDN of your app. There is no need to add https://
docker run --rm nvcr.io/nvidia/tritonserver:22.04-py3-sdk /workspace/install/bin/image_client -u XXX.YYY.ZZZ.azurecontainerapps.io -m densenet_onnx -c 3 -s INCEPTION /workspace/images/mug.jpg
After sending the request, you should see the prediction results, indicating that the deployed Triton server service is functioning correctly.

5. Conclusion
Beyond basic model hosting, Triton Inference Server's greatest strength lies in its ability to efficiently serve AI models at scale. It supports multiple deep learning frameworks, allowing seamless deployment of diverse models within a single infrastructure. With features like dynamic batching, multi-GPU execution, and optimized inference pipelines, Triton ensures high performance while reducing latency. While it may not replace custom-built inference solutions for highly specialized workloads, it excels as a standardized and scalable platform for deploying AI across cloud and edge environments. Its flexibility makes it ideal for applications such as real-time recommendation systems, autonomous systems, and large-scale AI-powered analytics.
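As a quick sanity check alongside the image_client test in section 4, you can also hit the Triton server's standard HTTP endpoints directly from Python. A minimal sketch using the requests package; the FQDN is a placeholder for your Container App's hostname:

```python
# Probe the Triton server deployed on Azure Container Apps over its HTTP API.
# Replace the placeholder FQDN with your app's actual hostname.
import requests

BASE = "https://XXX.YYY.ZZZ.azurecontainerapps.io"   # placeholder FQDN
MODEL = "densenet_onnx"

# Server-level health checks (Triton's KServe v2 inference protocol endpoints).
print("live:    ", requests.get(f"{BASE}/v2/health/live", timeout=10).status_code)
print("ready:   ", requests.get(f"{BASE}/v2/health/ready", timeout=10).status_code)

# Model-level readiness and metadata (expected inputs/outputs, datatypes, shapes).
print("model ok:", requests.get(f"{BASE}/v2/models/{MODEL}/ready", timeout=10).status_code)
print(requests.get(f"{BASE}/v2/models/{MODEL}", timeout=10).json())
```

A 200 status on the health and model endpoints confirms the container is up and the densenet_onnx model loaded correctly before you send any inference requests.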
6. References
Quickstart — NVIDIA Triton Inference Server
Deploying an ONNX Model — NVIDIA Triton Inference Server
Model Repository — NVIDIA Triton Inference Server
Triton Tutorials — NVIDIA Triton Inference Server
View the full article
-
Hi team, my company just recently onboarded as a learning partner. I have submitted the required information. However, I haven't received any approval or acknowledgement regarding the achievement code since the submission on Monday. I also checked the Microsoft courses and was unable to locate the achievement code. Kindly advise? View the full article
-
Hello folks, Around two months ago, two of the F-keys stopped doing what they are supposed to (F5 & F6). F5 does indeed refresh the page (when pressed together with the Fn key), but it is also supposed to turn the screen brightness down, which it doesn't do anymore. Similarly, F6 doesn't turn the brightness up. F1 continues to mute the sound, F2 to decrease the volume and F3 to increase the volume (I don't use F4 - turn mic off). F8 accesses the WiFi settings and F9 the general settings, F10 opens WiFi (Bluetooth) and F11 opens the Lenovo Vantage app. All of these work without an additional key press - i.e. just press F1 to mute the sound immediately. F5 & F6 should alter the screen brightness, but they don't - nor do they in combination with other keys such as Fn, Alt, Ctrl or Shift. This has been going on for a couple of months and I haven't got to the bottom of it yet - I've updated Windows, updated the drivers and BIOS - all to no effect. Any ideas?! Art PC: Lenovo ThinkPad T495s / Ryzen 7 Pro 3700U / Radeon Vega GFx 2.3GHz / 16Gb RAM / Win 11 Pro / Version 23H2 / Build 22631.4974 View the full article
-
This may be a dumb question and I apologize in advance if it is. I have so many unwanted programs, apps, and files on my laptop currently. I basically want to start from scratch. I also want to remain in the Dev channel, build #26120.3281, currently updating to Windows 11 Insider Preview 10.0.26120.3291. I know how to factory reset my laptop, but I was wondering if it would affect my status in the Insider program, and if so, how to successfully reset the laptop and remain in the program. View the full article
-
I signed up for the Windows Insider beta channel and immediately opted out, but I am still getting Insider previews. Why is that, and how do I fix it? It said my device is queued for unenrollment, but I still had to install an Insider preview afterwards, which I really wanted to avoid. Right now I don't have any immediate problems, but I don't want to keep getting new Insider builds. What is the safest way to leave the Windows Insider Program without reinstalling? Should I wait for a big update, or is there another way? View the full article
-
My PC takes between 60-80 seconds before the Windows logo comes up. I created a Process Monitor logfile and in there were a dozen entries with Result=CANCELLED which had durations of 3 to 17 seconds. The one taking 17.8 seconds was trying to access a file, "C:\Users\Public\AccountPictures\S-1-5-21-3415986207-50523673-2978598619-1001\{F0230BAE-7D60-44C0-B949-B0EF3DE3E0FF}-Image192.jpg", which does not exist. It does exist in "C:\Users\Public\Public AccountPictures\", so there is a mismatch in my system. I clean installed Windows 11 two years ago and have installed all upgrades, and that's all that has been done to the system. As for the other long-duration entries, they all exist in the C:\windows\system32\config folder. What's the problem with my system and what can I do to fix it? I searched the registry for one of the file names and found Computer\HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\AccountPicture\Users\S-1-5-21-3415986207-50523673-2978598619-1001, so the registry points to the wrong folder. The folder AccountPictures does not exist in my system; it's called Public AccountPictures. View the full article
-
On my Windows 11 PC I have Open Shell installed (I mention this in case it's relevant). A few hours ago I had a dialog box prompt in Windows to install a 'Start Menu' update. I was in a hurry to get something done and just accepted the update, acknowledging it via the usual separate Windows 'do you trust this software?' dialog box. Now though I'm wondering if I've inadvertently run something malicious. I should point out that the PC regularly gets virus checked with ESET's antivirus software, and since the suspect install I've also re-run ESET as well as Malwarebytes and all is clean. However, it's really bugging me. I don't think it was Open Shell doing this because, even though I don't know what version it was on prior to this Start Menu update, I've now checked and it's not on the most recent version, so I'm not convinced it was that causing the prompt. Not sure what else to check really; all looks good so far. I've spent ages online looking for any potential issues but have come up with zero. I should note that, if I recall correctly, I may have had a similar prompt on my other PC (running Windows 10) a few days ago, and this runs Classic Shell, the forerunner to Open Shell. I said No to that update though. So I'm puzzled. Any ideas please? Can I see any kind of history of 3rd party updates within Win 11? (I don't mean Microsoft updates.) View the full article
-
I have a C drive on an SSD, and I have a SATA internal drive (F:) that I want to use for storing Music, Downloads, Documents, Pictures, and Videos automatically. Currently they go to: "C:\Users\there\OneDrive\Documents". I have no interest in using OneDrive, and frankly thought I had deleted it. It's like a bad penny, it keeps coming back. I want to change to F:\Documents, and the same for all of the others. Any help would be appreciated. TRS P.S. I have no idea why the user name on the C drive is "there" instead of my name. "There" is the first 5 letters of my email address. View the full article
-
I'm a former macOS user who's used to the Finder's column view for file navigation--so many folders available at one time to drop, copy or cut files. But in Windows 11, it's not possible to navigate files in columns. Any tips on how to handle general file navigation in Windows 11? Pictures appreciated. View the full article
-
When I first registered and logged in to Copilot I had a character limit of 128,000; now it has changed to 8,000. How do I put it back to 128,000? View the full article
-
In the fast-evolving world of cloud computing, Platform as a Service (PaaS) drives innovation, agility, and scalability like never before. As organizations unlock its full potential, ensuring strong security measures remains essential. With the cloud landscape continuously evolving, adopting proactive security strategies helps organizations stay resilient against emerging threats.

The security gaps in PaaS
Unlike Azure Virtual Networks, which provide a strong security perimeter for compute resources, PaaS services operate in a different security model. While they include network controls, there is an opportunity to enhance granularity and deepen virtual network integrations. Strengthening these areas can help reduce potential security blind spots that attackers might attempt to exploit. Additionally, the reduced visibility into infrastructure and the complexities of shared responsibility models make securing PaaS environments a unique challenge. So, what’s the solution? To bridge these gaps, organizations must adopt a new security paradigm—one that moves beyond traditional models and embraces zero-trust security specifically tailored for PaaS environments.

Data exfiltration: The silent threat
As organizations increasingly rely on PaaS, the risk of unauthorized data exposure grows. Without proper controls, sensitive data can be maliciously or accidentally leaked, resulting in compliance violations, financial losses, and reputational damage.
🔐 Case study: In a recent incident, attackers exploited misconfigured access controls to exfiltrate sensitive data from a cloud-based platform. The lack of network segmentation and outbound traffic restrictions allowed unauthorized data transfers, going undetected until it was too late.
🔑 The takeaway: To mitigate data exfiltration risks, enforce strict outbound traffic controls, conduct regular access policy audits, and implement monitoring for early threat detection. This proactive approach helps ensure that sensitive data remains safe from both internal and external threats.

The visibility void
PaaS streamlines deployment by abstracting the underlying infrastructure, though there is an opportunity to enhance visibility into security events. By improving access to logs, network traffic insights, and threat monitoring, organizations can strengthen their ability to detect and respond to potential security incidents more effectively.
🔎 Solution: Organizations must implement comprehensive security telemetry, logging, and automated monitoring tools to gain deeper visibility into their PaaS environments. These solutions help identify potential threats before they escalate into full-blown security incidents.

The shared responsibility conundrum
Navigating the shared responsibility model in PaaS security can be challenging. While cloud providers secure the underlying infrastructure, customers are responsible for application security, configurations, and access management. A lack of clarity in these roles often leads to security gaps.
⚠️ Case study: In a 2024 breach, attackers exploited inadequate network access controls to access sensitive data without authorization. Although the PaaS platform itself was secure, the incident underscored the importance of implementing strong customer-side security measures.
🔑 The takeaway: Enforcing zero-trust principles, least-privilege access, and strong authentication protocols is essential to mitigate such attacks.
Insider threats: The growing risk from within
Insider threats continue to be one of the most insidious risks in cloud security, particularly in PaaS environments. While external attackers often capture the spotlight, insiders—whether malicious or negligent—can exploit system vulnerabilities, misconfigurations, or weak access controls to gain unauthorized access to sensitive data. Insiders often have legitimate access to systems and networks, making these threats harder to detect.
⚠️ Case study: In a 2024 breach, an employee’s compromised credentials were used to exfiltrate sensitive customer data from a cloud-based application. The attack went undetected for weeks due to insufficient internal traffic monitoring and overly broad access permissions.
🔑 The takeaway: Address insider threats by implementing strong access controls, continuous monitoring, and proper segmentation of duties.

Azure's network security perimeter: A game-changer for PaaS security
To address the evolving threat landscape in cloud environments, Microsoft Azure has introduced network security perimeter, a powerful innovation that reinforces a multi-layered security approach for PaaS resources. By embracing zero-trust principles and leveraging identity-aware perimeter architectures, organizations can secure their cloud-based assets more effectively than ever before.

What makes network security perimeter a must-have?
Azure's network security perimeter provides a robust set of features to safeguard PaaS environments. Here’s how it helps secure your cloud assets:
✅ Micro-segmentation and least-privilege access – Take full control over who and what can access your PaaS resources. With finely tuned access rules, administrators can regulate inbound and outbound traffic, enforce least-privilege access, and reduce the attack surface.
✅ Data exfiltration prevention – When PaaS resources are in enforced mode, all public traffic is automatically blocked, preventing unauthorized data leaks and ensuring a secure, controlled environment for your sensitive data.
✅ Seamless hybrid cloud security – Securely connect your on-premises and cloud environments using private endpoints, eliminating exposure to the public internet. This boosts security in hybrid cloud deployments.
✅ Unified security management – Eliminate the complexity of managing security policies for each PaaS resource individually. Group multiple PaaS resources under a single security profile, simplifying access control and creating a centralized, streamlined security approach.
✅ Enhanced monitoring and compliance – Gain deep visibility into your security posture. With perimeter access logs, organizations can monitor traffic patterns, detect anomalies, and respond to security threats—keeping compliance in check.

Key use cases for network security perimeter
Azure's network security perimeter offers effective, real-world security solutions tailored for PaaS environments.
- Network isolation: Establish a protective perimeter around PaaS resources, blocking unauthorized access and preventing data exfiltration to unauthorized destinations.
- Private hybrid connectivity: Enables secure on-prem-to-cloud connections with private endpoints.
- Granular access control: Administrators can define explicit access rules, ensuring only trusted users and applications interact with PaaS resources.
- Centralized security management: Streamlines security configurations, reducing misconfigurations and minimizing security risks.
- Regulatory compliance and auditing: Provides detailed access logs that are essential for audit and compliance readiness, making it easier to meet regulatory requirements.

🚀 Why network security perimeter matters now more than ever
The rise in PaaS-targeted attacks demands a stronger defense strategy. The breaches in 2024 made one thing crystal clear: access controls and identity security are mission critical. Network security perimeter closes the security gaps, ensuring only the right entities access your most valuable cloud assets.

Final thoughts: future-proofing PaaS security
PaaS offers unmatched efficiency, but security must always be a top priority. Organizations need to fortify key pillars such as identity management, data protection, access control, and visibility to defend against evolving cyber threats. By leveraging Azure’s network security perimeter, organizations can go beyond traditional security measures and embrace a more proactive, intelligent, and resilient cloud security posture.

🔹 Ready to take control of your PaaS security? Explore Azure's network security perimeter today and safeguard your cloud journey! View the full article
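To complement the perimeter controls above with the regular access policy audits the post recommends, a small scripted check of data-plane exposure can help. A rough sketch using the Azure SDK for Python (azure-identity and azure-mgmt-storage); the subscription ID is a placeholder, and the public_network_access property name is an assumption that may vary slightly across SDK and API versions:

```python
# Flag storage accounts whose data plane is still reachable from the public internet.
# Assumes: pip install azure-identity azure-mgmt-storage ; subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.storage import StorageManagementClient

subscription_id = "<your-subscription-id>"
client = StorageManagementClient(DefaultAzureCredential(), subscription_id)

for account in client.storage_accounts.list():
    # public_network_access is typically "Enabled" or "Disabled"; treat a missing value as enabled.
    exposure = getattr(account, "public_network_access", None) or "Enabled"
    if exposure.lower() != "disabled":
        print(f"Review: {account.name} (public network access: {exposure})")
```

Running a report like this on a schedule gives an early signal when a resource drifts outside the intended perimeter.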
-
If you've ever wanted to try out Azure Virtual Network Manager (AVNM) but weren't sure where to start, our team invites you to try out our lab for virtual network management at scale with AVNM! This lab serves as an introductory guide to AVNM and its many useful features. The full lab is also published in this repository. Let's dive in! Welcome to our lab for virtual network management at scale with AVNM! Check out Azure Virtual Network Manager's public documentation for more info! It also contains how-to guides for all available features in the Azure Portal if you encounter issues on the lab's steps. Check prerequisites and deploy the lab's ARM template. Before you get started with this lab, you'll want to check whether you meet the following prerequisites: General prerequisites Network group-related permissions Azure Policy permissions If you're confident you have the right set of permissions and access level to the subscription(s) where you want to test AVNM, then go ahead and deploy the lab's ARM template. This template will deploy virtual networks and their subnets, a network security group, and virtual machines and their network interface cards. Remember to clean up these resources after your lab -- especially the VMs -- for security and cost purposes! Log in to the Azure Portal and search for "Deploy a custom template." Select "Build your own template in the editor," copy-paste the contents of the lab's ARM template, and select the Save button. Select the desired subscription, resource group, and region where you want to deploy this template's resources, then Review + create. Create your Azure Virtual Network Manager instance. You're tasked with setting up connectivity, security, routing, and more among several virtual networks. To achieve this, you need an instance of Azure Virtual Network Manager (AVNM) -- a network manager. In the Azure Portal, search for "Network managers" and create a new one in your desired subscription, resource group, and region. Select all feature options. Your network manager can manage resources outside of the region it's created in! Proceed to the Management scope tab and only add your current test subscription to this network manager's scope. This step defines the boundary around the network resource that this network manager will be able to manage. Please only add your lab's subscription to the network manager's scope! Otherwise, you could impact your teammates' environments. In general, you can add several subscriptions and/or management groups to the network manager's scope. Finish creating this network manager. Navigate to your new network manager instance. This is where you'll group your network resources, configure your desired settings, and more! Segment your virtual networks into network groups. Before diving into setting up connectivity, security, routing, and more across your network resources, you first need to group the 50+ virtual networks in your scope. Inside your network manager, expand the Settings in the left-hand menu and navigate to the Network groups blade. Create 3 network groups of Virtual network member type. One group will represent all the virtual networks within the subscription, another will represent trusted virtual networks, and the last will represent non-trusted virtual networks. The network groups you created are currently empty. To populate them, you can manually add members, or you can dynamically add members with Azure Policy. 
All virtual networks group
Navigate into this network group and select the Create Azure Policy button to automatically populate its members. Use the GUI to include any virtual network that belongs to the resource group where you deployed the ARM template. You can select the Preview resources button to check what members this Policy will pick up. There should be 51 virtual networks shown. Save and exit from this network group.

Trusted virtual networks group
Navigate into this network group and select the Create Azure Policy button to automatically populate its members. Use the GUI to include any virtual network that has a tag with the key value pair with env as the key and Trusted as the value. You can select the Preview resources button to check what members this Policy will pick up. There should be 25 virtual networks shown. Save and exit from this network group.

Non-trusted virtual networks group
Navigate into this network group and select the Create Azure Policy button to automatically populate its members. Use the GUI to include any virtual network that has a tag with the key value pair with env as the key and nonTrusted as the value. You can select the Preview resources button to check what members this Policy will pick up. There should be 25 virtual networks shown. Save and exit from this network group.

It may take up to a couple minutes for the Azure Policy to fully populate the network groups with member virtual networks. You can proceed without needing to wait for network groups to be fully populated. You're finished setting up your network groups! Now it's time to build your desired configurations for connectivity, security, and routing, and deploy them across your network groups.

Set up a hub and spoke topology.
You want to build bi-directional connectivity between a hub virtual network and all of your trusted and non-trusted virtual networks as spokes. You also want your trusted virtual networks to be able to talk to one another (but not with the non-trusted virtual networks). Depending on how many spoke virtual networks you have, this could be very tedious -- but with AVNM, this can be set up in just a few clicks! Inside your network manager, expand the Settings in the left-hand menu and navigate to the Configurations blade. Create a Connectivity configuration. On the Topology tab, create a Hub and spoke topology with the hubVNet as its hub. For the Spoke network groups, add your trusted virtual networks group and your non-trusted virtual networks group. You may need to zoom out (Ctrl -) in order to add network groups on this tab. For your trusted virtual networks group, you'll want those virtual networks to be able to communicate directly to one another without needing to hop through the hub. Enable direct connectivity for this network group across regions. This step builds a mini global mesh among the members of this network group. The trusted virtual networks can all talk to each other directly, but not to the non-trusted virtual networks. Check out the Visualization tab, then Review + create the configuration. Select Create and start deployment to get a headstart on pushing this connectivity to your virtual networks. In the Deploy a configuration pane, your connectivity configuration should be populated already. Select the region where you deployed the ARM template, then review and deploy.
If the site brings you back to the Configurations page, refresh the pane, select your connectivity configuration, and select the Deploy button to deploy the configuration to the region where you deployed the ARM template. Creating a configuration alone won't affect your target virtual networks. You must deploy your configurations into your desired regions to take effect. You've built your desired network topology! Let's take a look at securing your virtual networks next. Secure your virtual networks with a baseline ruleset. Your organization has identified some high-risk network ports that you need to block across all of your virtual networks. You also need to block additional ports specifically for your trusted virtual networks. Inside your network manager, expand the Settings in the left-hand menu and navigate to the Configurations blade. Create a Security admin configuration. On the Rule collections tab, add a rule collection that will contain the security admin rules covering all your virtual networks. This rule collection should target the network group containing all the virtual networks. Add a security admin rule to deny inbound TCP traffic to the destination ports 20-23. Or if you'd like, you can create 4 separate rules for each destination port 20, 21, 22, and 23. You do not need to include any network groups in the source or destination of the rule itself. By targeting the network group at the rule collection level, any rules defined in the rule collection will be applied onto the target network group. On the Add a rule collection pane, select the Add button to finish adding this complete rule collection to the security admin configuration. Add another rule collection that will contain an extra rule just for your trusted virtual networks. This rule collection should target the network group containing the trusted virtual networks. Add a security admin rule to deny inbound TCP traffic to the destination port 445. On the Add a rule collection pane, select the Add button to finish adding this complete rule collection to the security admin configuration. See what you did here? You can associate rule collections with different network groups to achieve modularity for your security rules. This same mechanism can be used to provide "exceptions" in org-wide security rules to particular virtual networks. Review + create this configuration and select Create and start deployment to get a headstart on pushing these security admin rules to your virtual networks. In the Deploy a configuration pane, your security admin configuration should be populated already. Select the region where you deployed the ARM template, then review and deploy. All of your virtual networks now have security guardrails! Downstream NSGs will not be able to conflict with these security admin rules, as traffic denied by the security admin rules will be dropped upon contact with those rules. Route non-trusted spoke-to-spoke traffic through an Azure Firewall. Don't forget to route traffic between your non-trusted virtual networks through the Azure Firewall residing in your hub virtual network! Inside your network manager, expand the Settings in the left-hand menu and navigate to the Configurations blade. Create a Routing configuration. On the Rule collections tab, add a rule collection that will contain the routing rule for your non-trusted virtual networks. This rule collection should target the network group containing non-trusted virtual networks. 
Add a routing rule to steer spoke virtual network traffic toward the hub virtual network's Azure Firewall. The Destination should describe the default route (IP address 0.0.0.0/0). The Next hop should be a Virtual appliance and its address will be "10.0.3.68", which represents the Azure Firewall's IP address. Add the routing rule to the routing rule collection. On the Add a rule collection pane, select the Add button to finish adding this complete rule collection to the routing configuration. Review + create this configuration and select Create and start deployment to get a headstart on deploying these routing rules to your virtual networks' subnets. In the Deploy a configuration pane, your routing configuration should be populated already. Select the region where you deployed the ARM template, then review and deploy. Upon deployment, AVNM will create the user-defined routes (UDRs) for all your non-trusted virtual networks' subnets. Traffic between these virtual networks will be routed through the IP address of the Azure Firewall in your hub virtual network. Network groups containing subnets can also be used with AVNM's routing configuration. Check out AVNM's public documentation or UDR management blog for more scenarios that AVNM's routing configuration can address. Manage the IP addresses of your virtual networks. Now let's take a look outside of AVNM's group-configure-deploy mechanisms. AVNM's IP address management (IPAM) feature lets you create pools for IP address planning, automatically assign non-overlapping CIDR addresses to Azure resources, and prevent address space conflicts across on-premises and multi-cloud environments. You're trying to plan for another hub and spoke topology between your hub virtual network and 5 spoke virtual networks. You know you'll also have to create a new virtual network and connect it to this topology. Let's walk through how IPAM can help you ensure there are no overlapping address spaces between the virtual networks that you want to connect in this topology, and even create a new spoke virtual network with guaranteed non-overlapping address space. Inside your network manager, expand the IP address management in the left-hand menu and navigate to the IP address pools blade. Create an IP address pool. On the IP addresses tab, specify the address space "10.0.0.0/16" that will cover the address space of 5 spoke virtual networks and hub virtual network. Review + create this IP address pool and select Create. Navigate into your new IP address pool. Expand the Settings in the left-hand menu and navigate to the Allocations blade. Let's associate 6 of your virtual networks -- 1 hub virtual network and 5 spoke virtual networks -- to this pool so you can check if there are overlaps in address space and monitor your IP utilization. Select the Associate resources button and associate any 5 of the spoke virtual networks and the 1 hub virtual network (named hubVNet). Refresh the pane. Notice how you can also allocate from this IP address pool by carving out address space for child pools and for static CIDR blocks to represent on-premises or non-Azure resources. What happens when you need to create another virtual network? During creation, you can actually set up the virtual network's IP address space from the available IP address space in this pool. Search in the Azure Portal for "Virtual networks" and create a new one in your resource group and region. 
On the IP addresses tab, check the box to Allocate using IP address pools and Select an IP address pool -- the one you just created! Save and Review + create this virtual network. In just a few minutes, you were able to create an IP address pool to track your hub and spoke virtual networks' IP address space, and even create a new virtual network from this IP address pool! You can also delegate IP address pools to non-AVNM users so they can create their virtual networks from their corresponding pool and enforce IP address management. Verify reachability between some of your spokes' virtual machines. You just set your organization up for success with AVNM and its features! There are several moving parts between connectivity, security, routing, and resource-specific configurations. So how do you know that what you've set up in your Azure environment is actually achieving the reachability you desire among your network resources? This is where AVNM's virtual network verifier tool can help us check this reachability. Whether you're troubleshooting traffic disallowance, diagnosing unexpected traffic allowance, or proving conformance to your organization's security requirements, virtual network verifier can provide the answers. You're confident in your AVNM setup, but somehow some traffic still isn't being delivered between two of your spoke virtual networks' VMs. Let's use virtual network verifier to pinpoint where the issue lies. Navigate back to your network manager by searching in the Azure Portal for its name or by searching for "Network managers." Inside your network manager, expand the Virtual network verifier in the left-hand menu and navigate to the Verifier workspaces blade. Create a verifier workspace. Did you know that you can delegate verifier workspaces to non-AVNM users? This will not give them access to the parent network manager, but it will enable them to run reachability analyses on their resources that evaluate over the scope of the network manager -- all without elevating their permissions. For example, a delegated user can check why their VM can't reach the internet, and if a security admin rule coming from a network manager they don't have access to is denying that traffic, they'll still be able to see metadata about that security admin rule. Navigate into your new verifier workspace and select the Define a reachability analysis intent button. Create a reachability analysis intent. This is where you can describe the traffic details of the source-to-destination path you want to verify -- in this case, that of one VM to another VM, which reside in two of your trusted spoke virtual networks. The protocol of the path you want to check is TCP. The source and destination types should be Virtual machines. The source VM is vm1 and the destination VM is vm2. The IP addresses should autofill after selecting the source and destination resources. If not, the source IP address is "10.0.49.4" and the destination IP address is "10.0.50.4". The destination port should be "80". Create this reachability analysis intent. Inside your verifier workspace, expand the Settings in the left-hand menu and navigate to the Reachability analysis intents blade. On the Reachability analysis intents blade, Refresh the pane, select the intent you just created, and Start analysis. Name the analysis and select the Start analysis button. This analysis may take 1-2 minutes to process. Refresh the pane to see when the analysis is finished, at which point you'll see View results available for this intent. 
View results of this intent's analysis run. A new pane should open where you can see a visualization of the reachability path of the intent you created. You can interact with this visualization by clicking on the resource icons and the paths between each node. This will open more details about the resource or step. You can also switch to the JSON output tab to see the full analysis result. Check out the visualization and select the path edge right before the traffic-blocked icon! Thanks for participating in our lab! Remember to clean up all network manager and template resources after your lab for security and cost purposes! View the full article
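Since the lab ends by reminding you to clean up all network manager and template resources, here is a rough teardown sketch using the Azure SDK for Python (azure-identity and azure-mgmt-resource). The subscription ID and resource group name are placeholders; deleting the group removes everything deployed into it, so double-check the name before running it:

```python
# Delete the lab's resource group and everything the ARM template deployed into it.
# Assumes: pip install azure-identity azure-mgmt-resource ; names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"
lab_resource_group = "<your-avnm-lab-resource-group>"

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)

# begin_delete returns a poller; result() blocks until the deletion has finished.
poller = client.resource_groups.begin_delete(lab_resource_group)
poller.result()
print(f"Deleted resource group: {lab_resource_group}")
```

If your network manager lives in a different resource group, remember to remove its deployed configurations and the manager itself separately in the Portal.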
-
Creating AI agents using Azure AI Foundry is a game-changer for businesses and developers looking to harness the power of artificial intelligence. These AI agents can automate complex tasks, provide insightful data analysis, and enhance customer interactions, leading to increased efficiency and productivity. By leveraging Azure AI Foundry, organizations can build, deploy, and manage AI solutions with ease, ensuring they stay competitive in an ever-evolving technological landscape. The importance of creating AI agents lies in their ability to transform operations, drive innovation, and deliver personalized experiences, making them an invaluable asset in today's digital age. Let's take a look at how to create an agent on Azure AI Foundry. We'll explore some of the features and experiment with its capabilities in the playground. I recommend starting by creating a new resource group with a new Azure OpenAI resource. Once the Azure OpenAI resource is created, follow these steps to get started with Azure AI Foundry Agents.

Implementation Overview
Open Azure AI Foundry and click on the Azure AI Foundry link at the top right to get to the home page, where you'll see all your projects. Click on + Create project, then click on Create new hub. Give it a name, then click Next and Create. New resources will be created along with your new project. Once inside your new project you should see the Agents preview option in the left menu. Select your Azure OpenAI Service resource and click Let's go. We can now get started with the implementation. A model needs to be deployed; however, it's important to consider which models can be used, and in which regions, for creating these agents. Below is a quick summary of what's currently available. Current supported models for Agent development from Azure OpenAI: Supported models in Azure AI Agent Service - Azure AI services | Microsoft Learn. Other supported models include Meta-Llama-405B-Instruct, Mistral-large-2407, Cohere-command-r-plus, and Cohere-command-r. I've deployed gpt-4 as Global Standard and can now create a new agent. Click on + New agent. A new agent will be created, and details such as the agent instructions, model deployment, Knowledge and Action configurations, and model settings are shown. The purpose of incorporating knowledge into AI agents is to enhance their ability to provide accurate, relevant, and context-specific responses. This makes them more effective in automating tasks, answering complex queries, and supporting decision-making processes. Actions enable AI agents to perform specific tasks and interact with various services and data sources. Here we can leverage these abilities by adding a Custom Function, an OpenAPI 3.0 specified tool, or an Azure Function to help run tasks. The Code Interpreter feature within Actions empowers the agent to read and analyze datasets, generate code, and create visualizations such as graphs and charts. In the next section we'll go deeper into code interpreter's abilities.

Code Interpreter
For this next step I'll use the weatherHistory.csv file from the Weather Dataset for code interpreter to work on. Next to Actions, click on + Add, then click on Code interpreter and add the csv file. Update the Instructions to "You are a Weather Data Expert Agent, designed to provide accurate, up-to-date, and detailed weather information." Let's explore what Code interpreter can do. Click on Try in playground at the top right.
I'll start by asking "can you tell me which month had the most rain?". Code interpreter already knows that I'm asking a question in reference to the data file I just gave it and will break down the question into multiple steps to provide the best possible answer. We can see that, based on the dataset, August 2010 had the most rain, with 768 instances of rainfall recorded. We'll take it a step further and create a graph using a different question. Let's ask the agent "ok, can you create a bar chart that shows the amount of rainfall from each year using the provided dataset?", to which the agent responds with the following: This is just a quick demonstration of how powerful code interpreter can be. Code interpreter allows for efficient data interpretation and presentation as shown above, making it easier to derive insights and make informed decisions. Next, we'll create and add a Bing Grounding Resource, which will allow an agent to include real-time public web data in its responses.

Bing Grounding Resource
A Bing Grounding Resource is a powerful tool that enables AI agents to access and incorporate real-time data from the web into their responses, and it also ensures that the information provided by the agents is accurate, current, and relevant. An agent will be able to perform Bing searches when needed, fetching up-to-date information and enhancing the overall reliability and transparency of its responses. By leveraging Bing Grounding, AI agents can deliver more precise and contextually appropriate answers, significantly improving user satisfaction and trust. To add a Bing Grounding Resource to the agent:
Create the Resource: Navigate to the Azure AI Foundry portal and create a new Bing Grounding resource.
Add Knowledge: Go to your agent in Azure AI Foundry, click on + Add next to Knowledge on the right side, select Grounding with Bing Search, then + Create connection. Add the connection with an API key.
The Bing Grounding resource is now added to your agent. In the playground I'll first ask "Is it raining over downtown New York today?". I get a live response that also includes links to the sources the information was retrieved from. The agent responds as shown below: Next I'll ask the agent "How should I prepare for the weather in New York this week? Any clothing recommendations?", to which the agent responds: The agent is able to break down the question in detail using gpt-4, leveraging the source information from Bing and providing appropriate information to the user. Other capabilities, such as custom functions, OpenAPI 3.0 specified tools, and Azure Functions, significantly enhance the versatility and power of Azure AI agents. Custom functions allow agents to perform specialized tasks tailored to specific business needs, while OpenAPI 3.0 specified tools enable seamless integration with a wide range of external services and APIs. Azure Functions further extend the agent's capabilities by allowing it to execute serverless code, automating complex workflows and processes. Together, these features empower developers to build highly functional and adaptable AI agents that can efficiently handle diverse tasks, drive innovation, and deliver exceptional value to users.

Conclusion
Developing an AI agent on Azure AI Foundry is a swift and efficient process, thanks to its robust features and comprehensive tools. The platform's Bing Grounding Resource ensures that your AI models are well-informed and contextually accurate, leveraging vast amounts of real-time data to enhance performance.
Additionally, the Code Interpreter simplifies the integration and execution of code for solving complex data analysis tasks. By utilizing these powerful resources, you can accelerate the development of intelligent agents that are not only capable of understanding and responding to user inputs but also of continuously improving through iterative learning. Azure AI Foundry provides a solid foundation for creating innovative AI solutions that can drive significant value across various applications.
Additional Resources:
Quickstart - Create a new Azure AI Agent Service project - Azure AI services | Microsoft Learn
How to use Grounding with Bing Search in Azure AI Agent Service - Azure OpenAI | Microsoft Learn
View the full article
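For readers who prefer code over the Portal, a comparable agent can be created from Python with the azure-ai-projects preview SDK. This is a rough sketch only: the package is in preview, and the client, method, and parameter names shown here (for example from_connection_string and create_agent) are taken from preview quickstarts and may differ between versions, so treat them as assumptions and check the current documentation:

```python
# Rough sketch: create an Azure AI Foundry agent from Python.
# Assumes: pip install azure-ai-projects azure-identity (preview SDK; names may differ by version).
import os

from azure.identity import DefaultAzureCredential
from azure.ai.projects import AIProjectClient

# The project connection string is shown on the project's overview page (set as an env var here).
project_client = AIProjectClient.from_connection_string(
    credential=DefaultAzureCredential(),
    conn_str=os.environ["PROJECT_CONNECTION_STRING"],
)

# Mirrors the Portal steps above: pick a deployed model and give the agent its instructions.
agent = project_client.agents.create_agent(
    model="gpt-4",  # name of your model deployment
    name="weather-data-expert",
    instructions="You are a Weather Data Expert Agent, designed to provide "
                 "accurate, up-to-date, and detailed weather information.",
)
print(f"Created agent: {agent.id}")
```

The created agent then shows up in the project's Agents blade, where the Knowledge and Action configurations from the walkthrough can be added.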
-
Hi community, I just joined to equip myself with Excel skills. I have gone through the Learn It beginner tutorial, and to practice on my own, I would be glad to have some example exercises to try my hand at. View the full article