Microsoft Windows Bulletin Board

Windows Server

Everything posted by Windows Server

  1. Hi everyone, I'm an indie author who is just getting started. I recently finished an ebook and would like to add my personal logo as a watermark or cover element to my own PDF version. However, I'm not very familiar with PDF editing tools, and the tutorials I found online either involve complicated steps or require paid software; I'd prefer to add the logo to the PDF in a free or low-cost way. I tried pasting the logo into Word and then converting to PDF, but that introduced typographical errors, and the file size ended up far too large when I used online PDF watermark tools instead. I know this may be a basic question, but I really don't have a clue. Please share how you would do this. View the full article
  2. When printing a PDF using MS Edge, the bottom portion of each page gets cut off. This happens even if I use a large margin, so it's definitely not a margin issue. I tried Firefox, and the issue was not there; Firefox printed the whole page properly even without any margin. I also tried printing the same PDF from my phone - no issues - so it's definitely not the printer. The culprit is definitely MS Edge. Page: A4. Printer: Canon Pixma E470. View the full article
  3. Artificial Intelligence is becoming more and more accessible to developers. However, one of the biggest challenges is still the cost of APIs for advanced models such as GPT-4o and many others. Fortunately, GitHub Models has arrived to change that scenario! Now you can experiment with AI for free, without needing a paid API key or downloading heavy models to your local machine. In this article, we explain in detail what GitHub Models is and how to use it for free with TypeScript in a hands-on project. As the example, we chose Microblog AI Remix, an open-source microblog project with AI features. We will walk through the structure of this project and demonstrate, step by step, how to integrate it with GitHub Models, removing the need for paid LLMs, including code comparisons before and after the changes. We will also see how to configure and run the project locally, and discuss the advantages and limitations of GitHub Models for prototyping AI projects. By the end of this article, you will have a solid understanding of GitHub Models, how to use it in your own projects, and how to start exploring the world of AI in an accessible, free way. Let's go! What is GitHub Models? GitHub Models is a GitHub initiative that offers a collection of ready-to-use AI models integrated into the platform. Think of GitHub Models as a marketplace of AI models: developers can discover large language models (LLMs) from different providers, test their capabilities in an interactive playground, and incorporate them into their applications in a simplified way. There are models of many origins and sizes - for example, OpenAI GPT-4o, open-source models such as Meta Llama 3.1, Microsoft's Phi-3, and Mistral Large 2, among others. All of them can be accessed for free for experimentation purposes. One of the great advantages of GitHub Models is that it allows free use of these models during the prototyping phase. In other words, you can test and build a proof of concept (POC) at no cost, using the infrastructure provided by GitHub. In practice, there are two ways to interact with the models: Playground (web interface): you can test the models directly in the browser on GitHub. There you can ask questions and get answers in real time from different models, adjust parameters (temperature, maximum number of tokens, etc.), and even compare the output of two different models side by side. Via API/SDK: if you need to integrate a model into a project, GitHub Models also provides a REST API and SDKs for several languages, such as Python, JavaScript/TypeScript, Java, and C#. Each model has a public inference endpoint. You can make HTTP calls to these endpoints or use SDKs (such as the Azure OpenAI SDK or the GitHub Models SDK itself) in several languages. Authentication is done simply with a GitHub personal access token (PAT), with no need for separate API keys. Just generate a PAT in your GitHub account (no special scopes, using the available Beta option) and use it in your requests. In other words, your GitHub token works as the credential to call the model, within the free usage limits.
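To make this concrete, here is a minimal sketch (not taken from the article) of calling the GitHub Models inference endpoint from TypeScript with the openai npm package, using a GitHub PAT as the credential. The endpoint, environment variable names, and model name match the values used later in this article; everything else is illustrative.

```typescript
import { OpenAI } from "openai";

// GitHub Models exposes an OpenAI-compatible inference endpoint,
// so the regular OpenAI client works against it; the GitHub PAT
// takes the place of an API key.
const client = new OpenAI({
  baseURL: process.env.GITHUB_MODELS_ENDPOINT ?? "https://models.inference.ai.azure.com",
  apiKey: process.env.GITHUB_TOKEN, // GitHub personal access token (PAT)
});

async function main() {
  const response = await client.chat.completions.create({
    model: "gpt-4o", // or another model from the catalog, e.g. a Llama or Mistral variant
    messages: [
      { role: "user", content: "Write a one-sentence microblog post about TypeScript." },
    ],
    max_tokens: 100,
  });
  console.log(response.choices[0]?.message?.content);
}

main().catch(console.error);
```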
Advantages and Limitations But everything that is free has its limitations, right? So let's go over the limitations of GitHub Models. Currently there are restrictions on calls per minute and per day, on the number of tokens per request, and on the number of concurrent requests. For example, low-category (smaller) models allow roughly 15 requests per minute and 150 per day, while high-category models (such as GPT-4o) have somewhat lower limits because they are heavier. But what if you like a model and want to put it into production? In that case, GitHub Models suggests migrating to a paid Azure endpoint - and the nice part is that you only need to swap the GitHub token for an Azure key; the rest of the code keeps working, with no additional changes! In short: GitHub Models is a practical way to find and experiment with state-of-the-art AI models for free. With it, developers can add AI features to TypeScript (or other language) projects using nothing more than a GitHub account. Next, we will look at the Microblog AI Remix example and then see in practice how to use GitHub Models in that project. Microblog AI Remix with GitHub Models Microblog AI Remix (or simply Microblog AI) is a sample project that combines a microblog web application with Artificial Intelligence features. It was created to demonstrate how to build modern, scalable web applications using the Microsoft Azure stack together with Server-Side Rendering (SSR) techniques and generative AI. At a high level, Microblog AI lets users create and view small blog posts (microblogs), with the help of an advanced AI model that generates content from the user's suggestions. I also invite you to try the project, fork it, and contribute improvements. The project is open source and can be tested in GitHub Codespaces. Leave a star ⭐ and contribute! Originally the project uses Azure OpenAI as its AI provider, but we will replace it with GitHub Models to provide a free, accessible alternative. Step by Step to Configure Microblog AI I recorded a video showing, step by step, how to migrate the project to GitHub Models. The video is available on my YouTube channel (in Portuguese) and you can watch it there. First of all, we need to clone the project and configure its dependencies. Follow the steps below: 1. Clone the official Microblog AI Remix repository: git clone https://github.com/Azure-Samples/microblog-ai-remix.git cd microblog-ai-remix 2. Install the project dependencies: npm install cd server npm install 3. Create a .env file at the root of the project and add the following environment variables: GITHUB_MODELS_ENDPOINT=https://models.inference.ai.azure.com GITHUB_MODELS_TOKEN=YOUR_TOKEN You can generate this token in your GitHub account under Settings > Developer Settings > Personal Access Tokens > Generate new token (beta). 4. In the /server directory, create the local.settings.json file with the following content: { "IsEncrypted": false, "Values": { "AzureWebJobsStorage": "UseDevelopmentStorage=true", "FUNCTIONS_WORKER_RUNTIME": "node", "GITHUB_MODELS_ENDPOINT": "https://models.inference.ai.azure.com", "GITHUB_MODELS_TOKEN": "YOUR_TOKEN" }, "Host": { "LocalHttpPort": 7071, "CORS": "*", "CORSCredential": true } }
5. Now go to the file app/services/openaiService.ts and make the following changes: OpenAI client import: replace the AzureOpenAI import with OpenAI: import { OpenAI } from "openai"; Renaming the class and client: the AzureOpenAIService class was renamed to GitHubModelsService, and the AzureOpenAI client instance was replaced with an OpenAI one. A default modelName (gpt-4o) was added to be used in the completion-creation requests - although it could be any other modelName of your choice, such as Llama 3.1 or Mistral 7B. class GitHubModelsService { private client: OpenAI; private readonly toneGuidelines: ToneGuidelines; private readonly modelName: string = "gpt-4o"; (...) Client configuration: the Azure-specific environment variables (AZURE_OPENAI_API_KEY, AZURE_OPENAI_ENDPOINT, etc.) were replaced with GitHub environment variables (GITHUB_TOKEN and GITHUB_MODELS_ENDPOINT). this.client = new OpenAI({ baseURL: process.env.GITHUB_MODELS_ENDPOINT || "https://models.inference.ai.azure.com", apiKey: process.env.GITHUB_TOKEN, }); Exporting the class instance: the exported instance now constructs the new GitHubModelsService class (the export name is unchanged, so existing imports keep working). export const azureOpenAIService = new GitHubModelsService(); And that's it! You can now run the project locally and test the AI features with GitHub Models. If you want more details about what was changed, there is a branch named feat/github-models-usage with all the changes; you can compare it with the main branch to see what was modified. 6. Finally, to run the project, just run the following commands at the root of the project: npm run build:all npm run dev You can now open the application at http://localhost:5173/ and start creating your microblogs with the help of GitHub Models! Conclusion GitHub Models is an excellent alternative for anyone who wants to experiment with AI at no cost. It lets you test advanced models such as GPT-4o without paying for APIs or setting up complex infrastructure. In the case of Microblog AI Remix, we were able to replace the paid Azure OpenAI API with GitHub Models with minimal code changes, making the application accessible to any developer. Of course, as noted earlier, if you want to go to production, GitHub Models suggests migrating to a paid Azure endpoint. But for prototyping and learning, it is a powerful, free tool. If you enjoyed this article, don't forget to try Microblog AI Remix and give the repository a ⭐! We would love to hear your opinion on this approach and how you plan to use AI in your projects. Now it's your turn: clone the repository, try the changes, and explore GitHub Models for free. Let's code with AI without spending a thing! 💾💾💾 View the full article
  4. On Jan 29, 2025, we introduced DeepSeek R1 in the model catalog in Azure AI Foundry, bringing one of the popular open-weight models to developers and enterprises looking for high-performance AI capabilities. At launch, we made DeepSeek R1 available without pricing as we gathered insights on real-world usage and performance. Now, we’re excited to share that the model has better latency and throughput along with competitive pricing, making it easier to integrate DeepSeek R1 into your applications while keeping costs predictable. Scaling to Meet Demand: Performance Optimizations in Action The high adoption brought a few challenges—early users experienced capacity constraints and performance fluctuations due to the surge in demand. Our product and engineering teams moved quickly, optimizing infrastructure and fine-tuning system performance. You can expect higher rate limits and improved response times starting from Feb 26, 2025. We continue rolling out further improvements to meet customers’ expectations. You can learn more about rate limits in the Azure AI model inference quotas and limits documentation page. Thanks to these improvements, we’ve significantly increased model efficiency, reduced latency, and improved throughput, ensuring a smoother experience for all users. DeepSeek R1 Pricing With these optimizations, DeepSeek R1 now delivers a good price-to-performance ratio. Whether you’re building chatbots, document summarization tools, or AI-driven search experiences, you get a high-quality model at a competitive cost, making it easier to scale AI workloads without breaking the bank. What’s Next? We’re committed to continuously improving DeepSeek R1’s availability as we scale. If you haven’t tried it yet, now is the perfect time to explore how DeepSeek R1 on Azure AI Foundry can power your AI applications with state-of-the-art capabilities. Start using DeepSeek R1 today in https://ai.azure.com/ View the full article
  5. The Azure Connect feature pack release for SQL Server 2017 RTM is now available for download at the Microsoft Downloads site. Please note that registration is no longer required to download cumulative updates. To learn more about the release or servicing model, please visit: Azure Connect feature pack for SQL Server 2017 RTM (KB5050533) Article: https://learn.microsoft.com/en-us/troubleshoot/sql/releases/sqlserver-2017/azureconnect Starting with SQL Server 2017, we adopted a new modern servicing model. Please refer to our blog for more details on the Modern Servicing Model for SQL Server. MicrosoftÂź SQL ServerÂź 2017 RTM Latest Cumulative Update: https://www.microsoft.com/download/details.aspx?familyid=56128 Update Center for Microsoft SQL Server: https://learn.microsoft.com/en-us/troubleshoot/sql/releases/download-and-install-latest-updates View the full article
  6. Are you ready to dive into the transformative world of Artificial Intelligence but unsure where to start? You've landed on the right page. Welcome! Whether you're just getting started or have a bit of experience under your belt, this read will provide you with the foundational knowledge needed to unlock the potential of Azure AI Foundry. Azure AI Foundry offers a powerful suite of tools and machine learning models. Let's start harnessing its capabilities as a first step toward creating innovative AI solutions that can revolutionize your projects or business. As a Microsoft Technical Trainer who has delivered classes on AI and data for the past several years, I have witnessed the significant impact AI has had across various industries. My objective here is to equip you with the foundational knowledge and confidence necessary to embark on your AI journey. Azure AI Foundry (formerly Azure AI Studio) is a suite of tools that makes artificial intelligence accessible to everyone. It allows users to build, deploy, and manage AI solutions easily. Azure AI Foundry can be leveraged to address real-world business challenges and foster innovation. For instance, Azure AI Foundry can enhance predictive maintenance in manufacturing by analyzing sensor data to foresee equipment failures, minimizing downtime and costs. In retail, it can perform customer sentiment analysis across social media, reviews, and surveys, offering insights into customer satisfaction and areas for improvement. For more ways that Azure AI Foundry can help your organization, see customer stories here: Azure AI Foundry - Generative AI Development Hub and here: Need inspiration? Real AI Apps stories by Azure customers to help you get started With Azure AI Foundry, you can explore and develop various AI models and services tailored to your goals. The platform supports scalability, enabling proofs of concept to become full production applications effortlessly. It also supports continuous monitoring and refinement, ensuring long-term success. Here's a brief overview of AI Foundry's main architectural components and their integration. Azure AI Foundry architecture - Azure AI Foundry At the top level, AI Foundry provides access to the following resources: Management Center: Used to manage AI Foundry resources like hubs, projects, connected resources, and deployments. It is the part of the Azure AI Foundry portal that streamlines governance and management activities. In the management center, you can view and manage: projects and resources, quotas and usage metrics, and access and permissions. For more information see: Management center overview - Azure AI Foundry AI Foundry Hub: The main top-level resource in the AI Foundry portal, offering a centralized way to manage security, connectivity, and compute resources across playgrounds and projects. Once a hub is established, developers can create projects from it and access shared resources like storage accounts, key vaults, databases, and others without requiring continuous assistance from an IT administrator. The hub is built on the Azure Machine Learning service; its Azure resource provider is Microsoft.MachineLearningServices/workspaces and its type is hub. It offers: Security features, including a managed network for projects and model endpoints. Compute resources for development, fine-tuning, open-source, and serverless model deployments. Connections to other Azure services, such as Azure OpenAI and Azure AI Search. An Azure storage account for data and artifacts.
AI Foundry Project: A project is part of the hub. Projects help organize work, save state across tools like prompt flow, and enable collaboration. You can share files and data source connections within a project. Hubs support multiple projects and users. Projects manage billing and access, and provide data isolation, using dedicated storage containers to securely share files among project members. Once you have a project, you can connect to it from your code. You can explore models and capabilities before creating a project, but once you're ready to build, customize, test, and operationalize, a project is where you'll want to be. The Azure resource provider for a project is Microsoft.MachineLearningServices/workspaces, and the type is Project. The project offers: Reusable assets like datasets, models, and indexes. A container for uploading data within the hub's storage. Private data access for project members. Model deployments from the catalog and fine-tuned model endpoints. Connections: AI Foundry hubs and projects use connections to access resources from other services, such as an Azure Storage Account, Azure OpenAI, or other Azure AI services. Once you have set up the Azure AI Foundry hub, project, and connections, it's time to start exploring and deploying models. Model Catalog: Available in the Azure AI Foundry portal to discover and use a wide range of models for building generative AI applications. The model catalog features hundreds of models across model providers such as Azure OpenAI Service, Meta, NVIDIA, Hugging Face, DeepSeek, and of course models trained by Microsoft, like Phi. Azure AI Model Catalog – Foundation Models You can search for and discover models that meet your needs. The model catalog also offers model performance benchmark metrics, which can be accessed using the Compare Models feature or from the model card's Benchmark tab. Models need to be deployed to make them available for receiving inference requests. Azure AI Foundry offers a comprehensive suite of deployment options for these models depending on your needs and model requirements. Prompt Flow: This feature can be used to generate, customize, or run a flow. A flow is an executable instruction set that implements the AI logic. Flows can be created or run via multiple tools, like a prebuilt canvas, LangChain, and others. Iterations of a flow can be saved as assets; once deployed, a flow becomes an API. Prompt flow in Azure AI Foundry portal - Azure AI Foundry A prompt is sent to a model and consists of the user input, the system message, and any examples. The user input is the text submitted in the chat window. The system message is a set of instructions to the model that scopes its behavior and functionality. Evaluators: Helpful tools to assess the frequency and severity of content risks or undesirable behavior in AI responses. Performing iterative, systematic evaluations with the right evaluators can help teams measure and address potential response quality, safety, or security concerns throughout the AI development lifecycle. You can explore Azure AI Foundry benchmarks to evaluate and compare models on publicly available datasets. Deploy: Remember to deploy your model setup. Deployments are hosted within an endpoint and can receive data from clients and send responses back in real time. You can invoke the endpoint for real-time inference for chat, copilot, or other generative AI applications. AI Foundry has all the needed capabilities, including content filters, Responsible AI capabilities, and security. More about these aspects in my next article.
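As an illustration of invoking such an endpoint (this sketch is not from the article), here is a minimal TypeScript call against an Azure OpenAI-style chat deployment created from the model catalog. The endpoint URL, deployment name, API version, and key are placeholders for your own values, and other deployment types expose slightly different routes.

```typescript
// Minimal sketch: call a chat model deployed in Azure AI Foundry.
// Assumes an Azure OpenAI-style deployment; all values below are placeholders.
const endpoint = process.env.AZURE_AI_ENDPOINT!;     // e.g. https://<your-resource>.openai.azure.com
const deployment = process.env.AZURE_AI_DEPLOYMENT!; // the name you gave your deployment
const apiKey = process.env.AZURE_AI_API_KEY!;

async function chat(prompt: string): Promise<string> {
  const url = `${endpoint}/openai/deployments/${deployment}/chat/completions?api-version=2024-02-01`;
  const res = await fetch(url, {
    method: "POST",
    headers: { "Content-Type": "application/json", "api-key": apiKey },
    body: JSON.stringify({
      messages: [
        { role: "system", content: "You are a helpful assistant." }, // system message scopes behavior
        { role: "user", content: prompt },                           // user input from the chat window
      ],
      max_tokens: 200,
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Summarize what an AI Foundry hub is in one sentence.").then(console.log).catch(console.error);
```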
Playgrounds: Be sure to test your model using the Playgrounds on Azure AI Foundry. The portal offers a chat playground for deploying and interacting with AI chat models, allowing you to refine them before production deployment. Hear and speak with chat models in the Azure AI Foundry portal chat playground - Azure AI Foundry Now that you have the foundational knowledge, I encourage you to begin using Azure AI Foundry by following the resources and guidelines provided. Ready to get started with Azure AI Foundry? Here are some tutorials to guide you: Use the chat playground in Azure AI Foundry portal Tutorial: Deploy an enterprise chat web app in the Azure AI Foundry portal playground Explore, learn, and transform your ideas into reality with Azure AI Foundry. Stay tuned for more! Happy Learning! View the full article
  7. Downloads: | Windows: x64 Arm64 | Mac: Universal Intel Silicon | Linux: deb rpm tarball Arm snap Welcome to the February 2025 release of Visual Studio Code. This version adds a variety of features; the main updates related to GitHub Copilot are as follows: Next Edit Suggestions (preview) - Copilot predicts the code you are most likely to edit next. Agent mode (preview) - Copilot completes tasks autonomously. Notebook support in Copilot Edits - edit notebook files with ease. Code search - Copilot searches for files related to your chat prompt. Custom instructions GA - configure Copilot to match your requirements. To read the full release notes online, visit the Updates page. If you would like to try new features early, install the Insiders build; new features become available there as soon as they land. GitHub Copilot Copilot features are generally grouped into experimental, preview, and stable stages. Stage / Description: Experimental - still under development and not yet ready for general use. Preview - still being improved, but usable; feedback is welcome. Stable - generally available and stable for all users. Copilot Edits Agent mode improvements (experimental) Last month we introduced Agent mode for Copilot Edits in VS Code Insiders. In this mode, Copilot automatically searches the workspace for relevant context, edits files, checks for errors, and runs terminal commands to complete a task. Note: Agent mode is currently available in VS Code Insiders and will roll out progressively to VS Code Stable. This month brings UX improvements: terminal commands are shown inline so you can easily track which commands were run; you can edit a suggested terminal command directly before running it; and you can approve a terminal command with Ctrl+Enter. Agent mode automatically searches the codebase for relevant context; expand the message to see which searches were performed. We also improved the prompts and Agent mode behavior: Undo and Redo in chat now only undo or redo the last file edit, so you can roll back a specific step the model took without resetting the whole chat response. Agent mode can now run your build tasks automatically or on request; if the model runs unwanted tasks, you can disable the ⚙github.copilot.chat.agent.runTasks setting. Learn more about Copilot Edits Agent mode or check out the Agent mode announcement blog. Note: for Copilot Business or Enterprise users, an organization administrator must enable Editor Preview Features before Agent mode can be used. Notebook support in Copilot Edits (preview) You can now use Copilot Edits to edit notebook files in VS Code Insiders, with the same intuitive experience as editing code files. Create a notebook from scratch, modify the contents of multiple cells, insert and delete cells, and even change cell types. It provides a seamless workflow when working on data science or documentation notebooks. Note: this feature is currently only available in VS Code Insiders through the pre-release version of GitHub Copilot Chat. We will keep improving it before it ships in VS Code Stable. Editor integration improvements We further improved Copilot Edits' integration with the code and notebook editors: Automatic scrolling no longer happens while changes are applied; the viewport is preserved so changes are easier to review. The edit review actions were renamed ("Accept" → "Keep", "Discard" → "Undo") to make the behavior clearer. Copilot Edits changes are applied and saved immediately, and you can then explicitly Keep or Undo them. After you Keep or Undo a file, the next file is shown automatically.
The video below shows a change made in Copilot Edits being applied and saved as it is produced; with this live preview of updates you can Keep the change, and undoing works in the same way. UI improvements In preparation for unifying Copilot Edits and Copilot Chat, we refreshed the UI. Attached files that have not yet been sent now look like regular chat attachments. Only files actually modified by the AI are added to the changed-files list, which is shown above the chat input field. Enable the ⚙chat.renderRelatedFiles setting to get suggestions for related files; related-file suggestions are shown below the chat attachments. Copilot Edits limits removed Previously, at most 10 files could be attached to a prompt in Copilot Edits; this release removes that limit. We also removed the client-side rate limit of 14 requests per 10 minutes. Server-side usage limits still apply. Custom instructions GA Settings: ⚙github.copilot.chat.codeGeneration.useInstructionFiles The "custom instructions" feature, which lets you tune GitHub Copilot to the way your team works, is now generally available. Create a .github/copilot-instructions.md file in your workspace and write your requirements in Markdown; Copilot will follow them when generating code and chat responses. To use this feature, enable the ⚙github.copilot.chat.codeGeneration.useInstructionFiles setting. See the Copilot customization guide for details.
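To make this concrete, a minimal, hypothetical .github/copilot-instructions.md might look like the following; the guidelines themselves are only examples, not part of the release notes.

```markdown
# Copilot instructions for this repository

- Use TypeScript with strict mode for all new code.
- Prefer async/await over raw Promise chains.
- Add a unit test alongside every new function.
- In chat answers, keep explanations short and link to the relevant internal docs.
```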
Smoother authentication flow If you keep your code in GitHub repositories, you can take advantage of features such as advanced code search and the github chat features. However, for private GitHub repositories, VS Code needs permission to interact with GitHub. Previously this authentication request was shown in a modal dialog; it now flows more naturally in the chat view. The new authentication flow offers: Grant: complete authentication through the modal dialog, as before. Not Now: the request is not shown again until the current VS Code window is closed; essential features such as github may still ask again as an exception. Never Ask Again: permanently block the authentication request via the ⚙github.copilot.advanced.authPermissions setting. This setting only affects how Copilot authenticates in VS Code; it does not change the Copilot service's own access to GitHub repositories. To control which content Copilot can access, see the content exclusion settings. Improved codebase search in Copilot Chat Settings: ⚙github.copilot.chat.codesearch.enabled When you add the #codebase tag to a Copilot Chat prompt, Copilot finds the relevant code in your workspace. This update adds text search and file search so that more context can be provided. To enable this feature, turn on the ⚙github.copilot.chat.codesearch.enabled setting. The full list of tools is: embeddings-based semantic search, text search, file search, listing files changed in Git, project structure analysis, reading files, reading directories, and workspace symbol search. Attach problems from the Problems panel to a chat prompt While fixing code errors or other issues, you can attach errors from the Problems panel directly to Copilot Chat. How: drag and drop an item from the Problems panel into the chat window, type #problems in the chat prompt, or click the paperclip 📎 button and add the problems. You can attach a specific problem, all problems in a file, or all problems in the project. Attach folders as chat context Previously you could add a folder as chat context by dragging and dropping it from the Explorer. Now you can also attach a folder by clicking the paperclip 📎 icon or by typing #folder: followed by the folder name. Next Edit Suggestions (preview): collapsed mode added Settings: ⚙github.copilot.nextEditSuggestions.enabled ⚙editor.inlineSuggest.edits.showCollapsed This update adds a collapsed mode to Next Edit Suggestions (NES). When it is enabled, only the NES suggestion indicator is shown in the left editor margin, and the actual code suggestion appears only when you navigate to it with the Tab key. Subsequent suggestions then appear automatically until you dismiss one. This mode is disabled by default; enable it with the ⚙editor.inlineSuggest.edits.showCollapsed setting, or toggle it individually from the NES margin menu. Changing the Copilot code completions model In addition to changing the language model for Copilot Chat and Copilot Edits, you can now also choose the model used for inline completions. How: run the Change Completions Model command from the Command Palette, or select the Configure Code Completions item in the Copilot menu. Note: the list of available models may change over time. For Copilot Business or Enterprise users, an organization administrator must enable specific models in the Copilot policy settings. Expanded Copilot model availability This release adds more AI models. You can now select the following models in Copilot chat in VS Code and on github.com: GPT-4.5 (preview): OpenAI's latest model, with improved intuitive understanding, writing style, and broad knowledge; offered for Copilot Enterprise users. Claude 3.7 Sonnet (preview): available to all paid Copilot users, with excellent performance on agent-based tasks; early testing showed particularly improved automatic code fixing and refactoring. For details, see the GitHub blog posts: Introducing the GPT-4.5 model and Introducing the Claude 3.7 Sonnet model. Copilot Vision (preview) This update adds Copilot Vision support. You can now attach images to Copilot chat and make use of them. Possible tasks: attach a UI mockup and generate HTML/CSS code; attach a screenshot of VS Code when an error occurs and ask how to fix it; automatically generate alt text for images in documents and Markdown. Images can be attached in several ways: drag and drop from the OS or from the Explorer, paste from the clipboard, or attach a screenshot of the VS Code window. Currently only GPT-4o supports image processing; support for Claude 3.5 Sonnet and Gemini 2.0 Flash will be added soon. The supported image file formats are JPEG/JPG, PNG, GIF, and WEBP. Copilot status overview (experimental) Settings: ⚙chat.experimental.statusIndicator.enabled This release experimentally adds a new Copilot status overview. It helps you see the current Copilot status and key editor settings at a glance. Information shown in the status overview: usage information for Copilot Free users; editor-related settings (for example, whether code completions are enabled); and useful Copilot shortcuts and feature quick links. The Copilot status overview can be opened by clicking the Copilot icon in the Status Bar at the bottom of VS Code. Enable it with the ⚙chat.experimental.statusIndicator.enabled setting. TypeScript inline completion context (experimental) Settings: ⚙chat.languageContext.typescript.enabled Copilot's inline code completions and the /fix command can now provide richer context for TypeScript code. This feature is currently only available in VS Code Insiders and requires enabling the ⚙chat.languageContext.typescript.enabled setting. Custom instructions for Pull Request titles and descriptions Settings: ⚙github.copilot.chat.pullRequestDescriptionGeneration.instructions You can now apply custom instructions when automatically generating pull request titles and descriptions. With this setting you can: point to a specific file in your workspace as the guideline, or add instructions directly in your settings file to generate consistent PR titles and descriptions. Example: { "github.copilot.chat.pullRequestDescriptionGeneration.instructions": [ { "text": "Start every PR title with an emoji." } ] } The GitHub Pull Requests extension is required for automatic PR title and description generation.
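For convenience, several of the opt-in settings mentioned above can be enabled together in settings.json. The snippet below is illustrative only (the values are assumptions, and some of these features require VS Code Insiders or the pre-release Copilot Chat extension):

```jsonc
{
  // Custom instructions from .github/copilot-instructions.md
  "github.copilot.chat.codeGeneration.useInstructionFiles": true,
  // Richer #codebase search (adds text and file search)
  "github.copilot.chat.codesearch.enabled": true,
  // Related-file suggestions in Copilot Edits
  "chat.renderRelatedFiles": true,
  // Next Edit Suggestions and its collapsed mode
  "github.copilot.nextEditSuggestions.enabled": true,
  "editor.inlineSuggest.edits.showCollapsed": true,
  // Experimental Copilot status overview
  "chat.experimental.statusIndicator.enabled": true
}
```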
Accessibility improvements Copilot Edits accessibility This update greatly improves the accessibility of Copilot Edits: Audio signals were added for modified files and for changed regions (insertions, modifications, deletions). An accessible diff viewer is available in the diff editor (activated with the F7 key, just like the existing diff editor). activeEditorState window title variable We added activeEditorState, a new ⚙window.title variable. You can now see in the window title whether the file has been modified, how many problems it has, and whether it was changed by Copilot Edits. It is enabled by default in screen reader optimized mode and can be disabled via the ⚙accessibility.windowTitleOptimized setting. Workbench improvements Labels available in the Secondary Side Bar Views in the Secondary Side Bar now show labels instead of icons. This makes it easier to tell the views apart, just as in the Panel area - for example, the Copilot Edits and Copilot Chat views are now easier to distinguish. If you prefer, you can switch back to icon mode with the ⚙workbench.secondarySideBar.showLabels setting. And there is much more in this release! For more details, check the official release notes. View the full article
  8. We are excited to invite you to an informative session organized by the Tech for Social Impact team! This event provides a great opportunity to learn from experts and participate in meaningful discussions about the unique security challenges faced by nonprofits. Furthermore, we will offer go-to-market resources tailored specifically for nonprofits that partners can take advantage of. The session will feature our speakers, Jerry Carlson and Aysha Kaushik, who will offer valuable insights and strategies to enhance security within nonprofit organizations. TOPIC: Partner Webinar - Security Conversations with Nonprofits WHEN: Wednesday, April 2, 2025 TIME: 8:00 AM – 9:30 AM PT / 11:00 AM – 12:30 PM ET / 4:00 PM – 5:30 PM GMT Register today for this online event! Be sure to follow our Partners for Social Impact (nonprofit) discussion board to stay up to date on all Nonprofit announcements! View the full article
  9. 👋 Welcome Back! We're thrilled to bring you the latest updates from the Arc Jumpstart team in this month's newsletter. Whether you are new to the community or a regular Jumpstart contributor, this newsletter will keep you informed about new releases, key events, and opportunities to get involved within the Azure Adaptive Cloud ecosystem. Check back each month for new ways to connect, share your experiences, and learn from others in the Adaptive Cloud community. đŸ€© Jumpstart Hits 5 Years with a Refreshed Website February 2025 marks a significant milestone as we celebrate five incredible years of Arc Jumpstart. To commemorate this journey of innovation and community collaboration, we recently launched our brand-new Arc Jumpstart website (5 years of Arc Jumpstart with a refreshed website | Microsoft Community Hub). Without spoiling too much, here is a quick overview of some of the updates: Light and Dark Modes: Choose between light and dark themes to suit your preference. Responsive Design: Enjoy a seamless experience across all devices, from desktops to smartphones. Improved Accessibility: We have incorporated features to ensure everyone can navigate and utilize our platform with ease. Streamlined GitHub Issues: We now have a more structured and efficient way to ensure every issue counts! Explore the new website and continue your journey with us at Arc Jumpstart. 💧 Jumpstart Drops 5 new Jumpstart Drops just dropped! We love seeing community contributions and are always welcoming more. If you have something exciting to share, be sure to check out our contribution guidelines and submit a Jumpstart Drop! Here's what's new: Integrating Litmus Edge with Azure IoT Operations Azure Arc Windows Server Management License Activation Connecting IIoT Gateway to Azure IoT Operations Using Secret Store extension to fetch secrets in Azure Arc-enabled Kubernetes cluster Connecting PLC using Modbus and Dapr to Azure IoT Operations Visit Jumpstart Drops today! 🚀 Jumpstart Gems Diagrams Release The ACX Evaluation & Community and Arc Jumpstart teams are happy to share the new Jumpstart Gems diagrams. Hold on, is "Jumpstart Gems" a new thing? In case you missed it, just before the holidays, the Jumpstart team ⚡ officially launched the newly branded "Jumpstart Gems" 💎, your one-stop shop for all things adaptive cloud architecture diagrams. We also introduced our new "Treasure Hunter" Jumpstart community badge đŸ„‡! Release Notes Note: The changelog file can be found in the bundle. Posters are provided in both PPTX and PDF file formats. All posters are designed to be printed on a large kanban and can be used as a great swag giveaway at various events. Getting Started Jumpstart Gems is published under the Arc Jumpstart website assets page. ⚡Jumpstart Lightning We're excited to announce that three new Jumpstart Lightning videos are now available! If you're looking for podcast-style, fast-paced, insightful discussions on Adaptive Cloud technologies, these latest episodes are a must-watch: Azure Device Registry and Storage | But can it HA? Arc-enabled servers | Is it secured? I didn't know CloudCasa can do that Check out our YouTube Channel here. đŸ€ Stay Connected - Join the April Adaptive Cloud Community Call Please join us on Wednesday, April 2nd, at 8-9am PST (11-12pm EST) for our April Adaptive Cloud Community Call (join our Teams Channel and download the recurring .ICS invite here).
Turnout remains strong, ranging from FTEs in the Adaptive Cloud space - CTOs, CEOs, and lead engineers - to the various IoT Connected Community members and Microsoft MVPs. Our last call peaked at around 90 live attendees and featured topics on Azure Arc Jumpstart Updates, Azure Arc-server Updates, and SSH Posture Control & Security Baseline. Our community calls are open to everyone, including external customers, Microsoft MVPs, and internal employees. If you're new to this initiative, the Azure Adaptive Cloud Community is a public, non-NDA space where valuable information is shared freely. We hold a monthly community call every first Wednesday and maintain a Teams Channel for continuous information exchange. These calls highlight updates from our Adaptive Cloud product groups, including Azure Arc, Azure Local, Azure IoT, and AKS, among others. We also host short talks or demos on Azure Adaptive Cloud technologies and gather feedback from the community on issues, blockers, and use cases. Don't miss the chance to engage with us, and be sure to check out previous recordings on our YouTube channel. đŸ’« Hungry for More? Check out our Jumpstart Release Notes for more information on bug fixes and enhancements. View the full article
  10. Introduction Azure App Service is a powerful platform that simplifies the deployment and management of web applications. However, maintaining application performance and availability is crucial, and when performance issues arise, identifying the root cause can be challenging. This is where Auto-Heal in Azure App Service becomes a game-changer. Auto-Heal is a diagnostic and recovery feature that allows you to proactively detect and mitigate issues affecting your application's performance. It enables automatic corrective actions and helps capture vital diagnostic data to troubleshoot problems efficiently. In this blog, we'll explore how Auto-Heal works, its configuration, and how it assists in diagnosing performance bottlenecks. What is Auto-Heal in Azure App Service? Auto-Heal is a self-healing mechanism that allows you to define custom rules to detect and respond to problematic conditions in your application. When an issue meets the defined conditions, Auto-Heal can take actions such as: Recycling the application process Collecting diagnostic dumps Logging additional telemetry for analysis Triggering a custom action By leveraging Auto-Heal, you can minimize downtime, improve reliability, and reduce manual intervention for troubleshooting. Configuring Auto-Heal in Azure App Service To set up Auto-Heal, follow these steps: Access Auto-Heal Settings Navigate to the Azure portal. Go to your App Service. Select Diagnose and Solve Problems. Search for Auto-Heal or go to the Diagnostic Tools tile and select Auto-Heal. Define Auto-Heal Rules Auto-Heal allows you to define rules based on: Request Duration: If a request takes too long, trigger an action. Memory Usage: If memory consumption exceeds a certain threshold. HTTP Status Codes: If multiple requests return specific status codes (e.g., 500 errors). Request Count: If excessive requests occur within a defined time frame. Configure Auto-Heal Actions Once conditions are set, you can configure one or more of the following actions: Recycle Process: Restart the worker process to restore the application. Log Events: Capture logs for further analysis. Custom Action: You can do the following: Run Diagnostics: Gather diagnostic data (Memory Dump, CLR Profiler, CLR Profiler with Thread Stacks, Java Memory Dump, Java Thread Dump) for troubleshooting. Run any Executable: Run scripts to automate corrective measures. Capturing Relevant Data During Performance Issues One of the most powerful aspects of Auto-Heal is its ability to capture valuable diagnostic data when an issue occurs. Here's how: Collecting Memory Dumps Memory dumps provide insights into application crashes, high CPU, or high memory usage. They can be analyzed using WinDbg or DebugDiag. Enabling Logs for Deeper Insights Auto-Heal logs detailed events in the Kudu console, Application Insights, and Azure Monitor Logs. This helps identify patterns and root causes. Collecting CLR Profiler Traces CLR Profiler traces capture call stacks and exceptions, providing a user-friendly report for diagnosing slow responses and HTTP issues at the application code level. In this article, we will cover the steps to configure an Auto-Heal rule for the following performance issues: Capturing a .NET Profiler/CLR Profiler trace for slow responses. Capturing a .NET Profiler/CLR Profiler trace for HTTP 5XX status codes. Capturing a memory dump for high memory usage. Auto-Heal rule to capture a .NET Profiler trace for slow responses:
1. Navigate to your App Service in the Azure portal and click on Diagnose and Solve problems: 2. Search for Auto-Heal or go to the Diagnostic Tools tile and select Auto-Heal: 3. Click on 'On': 4. Select Request Duration and click on Add Slow Request rule: 5. Add the following information based on how much slowness you are seeing: After how many slow requests do you want this condition to kick in? - After how many slow requests you want this Auto-Heal rule to start capturing relevant data. What should be the minimum duration (in seconds) for these slow requests? - How many seconds a request should take to be considered a slow request. What is the time interval (in seconds) in which the above condition should be met? - Within how many seconds the slow requests defined above should occur. What is the request path (leave blank for all requests)? - If a specific URL is slow, you can add it in this section, or leave it blank. In the screenshot below, the rule is set for this example: "1 request taking 30 seconds in 5 minutes/300 seconds should trigger this rule". Add the values in the text boxes and click "Ok". 6. Select Custom Action and select the CLR Profiler with Thread Stacks option: 7. The tool options provide three choices: CollectKillAnalyze: the tool will collect the data, analyze it, generate the report, and recycle the process. CollectLogs: the tool will collect the data only; it will not analyze it, generate the report, or recycle the process. Troubleshoot: the tool will collect the data, analyze it, and generate the report, but it will not recycle the process. Select the option according to your scenario and click on "Save". 8. Review the new settings of the rule: Clicking on "Save" will cause a restart, as this is a configuration-level change and a restart is required for it to take effect, so it is advised to make such changes outside business hours. 9. Click on "Save". Once you click on Save, the app will be restarted, and the rule will become active and monitor for slow requests.
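For teams that prefer configuration over the portal, the same kind of slow-request rule can also be expressed in the site's Auto-Heal configuration (the autoHealRules section of the Microsoft.Web/sites/config web settings). The sketch below is a hedged approximation of the portal rule above, not an exact export of it; it uses a simple Recycle action, whereas the portal's diagnostic options (CLR Profiler, memory dump) map to a CustomAction instead.

```json
{
  "autoHealEnabled": true,
  "autoHealRules": {
    "triggers": {
      "slowRequests": {
        "timeTaken": "00:00:30",
        "count": 1,
        "timeInterval": "00:05:00"
      }
    },
    "actions": {
      "actionType": "Recycle",
      "minProcessExecutionTime": "00:01:00"
    }
  }
}
```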
Auto-Heal rule to capture a .NET Profiler trace for HTTP 5XX status codes: For this scenario, Steps 1, 2, and 3 remain the same as above (from the Slow Requests scenario), with the following changes: 1. Select Status Code and click on Add Status Code rule. 2. Add the following values for the status code or range of status codes you want this rule to be triggered by: Do you want to set this rule for a specific status code or a range of status codes? - Whether you want to set this rule for a single status code or for a range of status codes. After how many requests do you want this condition to kick in? - After how many requests returning the status code of interest you want this Auto-Heal rule to start capturing relevant data. What should be the status code for these requests? - Mention the status code here. What should be the sub-status code for these requests? - Mention the sub-status code here, if any; otherwise leave it blank. What should be the win32-status code for these requests? - Mention the win32 status code here, if any; otherwise leave it blank. What is the time interval (in seconds) in which the above condition should be met? - Within how many seconds the status code defined above should occur. What is the request path (leave blank for all requests)? - If a specific URL is returning that status code, you can add it in this section, or leave it blank. Add the values according to your scenario and click on "Ok". In the screenshot below, the rule is set for this example: "1 request throwing an HTTP 500 status code in 60 seconds should trigger this rule". After adding the above information, you can follow Steps 6, 7, 8, and 9 from the first scenario (Slow Requests), and the Auto-Heal rule for the status code will become active and monitor for this performance issue. Auto-Heal rule to capture a memory dump for high memory usage: For this scenario, Steps 1, 2, and 3 remain the same as above (from the Slow Requests scenario), with the following changes: 1. Select Memory Limit and click on Configure Private Bytes rule: 2. Based on your application's memory usage, add the private bytes (in KB) at which this rule should be triggered. In the screenshot below, the rule is set for this example: "The application process using 2000000 KB (~2 GB) should trigger this rule". Click on "Ok". 3. In Configure Actions, select Custom Action and click on Memory Dump: 4. The tool options provide three choices: CollectKillAnalyze: the tool will collect the data, analyze it, generate the report, and recycle the process. CollectLogs: the tool will collect the data only; it will not analyze it, generate the report, or recycle the process. Troubleshoot: the tool will collect the data, analyze it, and generate the report, but it will not recycle the process. Select the option according to your scenario. 5. For the memory dumps/reports to be saved, you will have to select an existing Storage Account or create a new one: click on Select, then create a new one or choose an existing one. 6. Once the storage account is set, click on "Save". Review the rule settings and click on "Save". Clicking on "Save" will cause a restart, as this is a configuration-level change and a restart is required for it to take effect, so it is advised to make such changes outside business hours. Best Practices for Using Auto-Heal Start with Conservative Rules: Avoid overly aggressive auto-restarts to prevent unnecessary disruptions. Monitor Performance Trends: Use Azure Monitor to correlate Auto-Heal events with performance metrics. Regularly Review Logs: Periodically analyze collected logs and dumps to fine-tune your Auto-Heal strategy. Combine with Application Insights: Leverage Application Insights for end-to-end monitoring and deeper diagnostics. Conclusion Auto-Heal in Azure App Service is a powerful tool that not only helps maintain application stability but also provides critical diagnostic data when performance issues arise. By proactively setting up Auto-Heal rules and leveraging its diagnostic capabilities, you can minimize downtime and streamline troubleshooting efforts. Have you used Auto-Heal in your application? Share your experiences and insights in the comments! Stay tuned for more Azure tips and best practices! View the full article
  11. Support for Windows 10 and other Microsoft products will end on October 14, 2025. Get tips to help you prepare your organization to navigate these milestones. Watch Windows 10 EOS: Myths, misconceptions, and FAQs – now on demand – and join the conversation at https://aka.ms/EndOfSupportFAQs. To help you learn more, here are the links referenced in the session: Products affected by End of Support in 2025 Windows 10​ (Non-LTSC) Office ​2016 | 2019 Exchange Server ​2016 | 2019 Visio ​2016 | 2019 Project ​2016 | 2019​ Skype for ​Business Server ​2015 | 2019 Also, monitor Windows Release Health, for updated information as it becomes available. Leverage Day 1 with Windows 11: A Quick Tour to inform your users on how to get the most out of their new Windows 11 desktop Where to start with your Windows 10 upgrade Windows deployment documentation | Microsoft Learn If you qualify for FastTrack benefits, our FastTrack resources can help you in your journey For more free technical skilling on the latest in Windows, Windows in the cloud, and Microsoft Intune, view the full Microsoft Technical Takeoff session list. View the full article
  12. Learn how to manage your migration from traditional PCs and legacy VDI to Windows 365 Enterprise and Windows 365 Frontline Cloud PCs. Watch Windows cloud migration and deployment best practices – now on demand – and join the conversation at https://aka.ms/CloudMigrationPractices. To help you learn more, here are the links referenced in the session: Windows 365 Link—the first Cloud PC device for Windows 365 Learn more about Windows in the cloud Learn more about Cloud Endpoints See a Windows 365 interactive, self-service demo Learn more about Windows in the Cloud Learn more about Windows 365 Learn more about Azure Virtual Desktop Learn more about Microsoft Intune Suite Check out our assets on migration Windows IT Pro Blog: Windows 365 Migration: It’s easier than you think Join the Windows 365 and Azure Virtual Desktop Tech Communities at Microsoft Tech Community For more free technical skilling on the latest in Windows, Windows in the cloud, and Microsoft Intune, view the full Microsoft Technical Takeoff session list. View the full article
  13. You're launching Viva Engage! Microsoft and the Viva Engage team are here to help. A solid adoption and training plan is key to building a successful network. This Masterclass aims to ensure you understand how to get the most out of all of Engage's features, even if you're just getting started. And we'll help you understand how to train audiences to scale the effectiveness of your initiatives, leaders, and knowledge discovery. If you already have a Viva Engage network, you'll find tips to refresh and innovate. Overview of the session In Week 4 of our Monday Masterclass, we will cover best practices for launching Viva Engage and training users effectively. Learn how to: Understand the fundamentals of core features like email digests, notifications, and the home feed, and enable your organization to use them effectively. Develop a plan to launch and adopt Viva Engage. Bring together the people you need to make the launch successful, including how to have a great champions plan. Offer specific training to communicators, leaders, and community admins. Train end users on how to explore Viva Engage, then engage with other users. Resources Fundamentals Helping you understand the basics of Viva Engage and how to get the most out of core capabilities. Home Feed Notifications Email Digests Communities Storylines Intermediate Build a launch plan and understand how to drive adoption while clarifying how Viva Engage fits in with the rest of your tools & channels. Which tool when Launch plan & adoption guide Advanced Get the most out of Engage with advanced features helping you to measure, support leaders, and manage change. Monitoring Viva Engage Getting leaders on Engage Champions networks Using Engage for change management Join us in our Week 4 Masterclass! In this masterclass, you'll be joined by product experts who have helped organizations adopt Viva Engage and train their employees to use it. Want to join more of the upcoming Masterclass series? Register here! Introducing Viva Engage Masterclass: Your Guide to the Viva Engage Essentials | Microsoft Community Hub View the full article
  14. Dive into the latest updates for Microsoft Intune Enterprise App Management, then learn how to leverage Microsoft Graph to take it even further. Watch Enterprise Application Management with Microsoft Graph – now on demand – and join the conversation at https://aka.ms/EAMWithGraph. To help you learn more, here are the links referenced in the session: What’s what with app management in the enterprise​ Use the Microsoft Graph API - Microsoft Graph​ Developer's guide to Microsoft Graph Intune devices and apps API overview - Microsoft Graph​ Enterprise Application Management​ For more free technical skilling on the latest in Windows, Windows in the cloud, and Microsoft Intune, view the full Microsoft Technical Takeoff session list. View the full article
  15. This is your moment to grab the attention of the global developer community. Share your latest tech firsthand, showcase your dev resources and tools, and connect with cutting-edge coders, creators, and influencers. Learn more about sponsoring Microsoft Build View the full article
  16. Take a closer look at key features and functionalities of Microsoft Intune Remote Help for Windows, Android, and macOS devices so you can start utilizing it today. Watch Secure helpdesk support using Intune Remote Help – now on demand – and join the conversation at https://aka.ms/SecureHelpdeskSupport. For more free technical skilling on the latest in Windows, Windows in the cloud, and Microsoft Intune, view the full Microsoft Technical Takeoff session list. View the full article
  17. Find the answers you need to help your organization become cloud-ready. Watch AMA: Cloud native with Microsoft Intune – now on demand – and join the conversation at https://aka.ms/AMA/CloudNativeWithIntune. For more free technical skilling on the latest in Windows, Windows in the cloud, and Microsoft Intune, view the full Microsoft Technical Takeoff session list. View the full article
  18. Windows LAPS continues to evolve. Find out what's new - from automatic account management and passphrases to disaster recovery and bug fixes. Watch The latest and greatest in the world of Windows LAPS – now on demand – and join the conversation at https://aka.ms/LatestInLAPS. To help you learn more, here are the links referenced in the session: Automatic account management demo Passphrase support demo Rollback detection demo Password recovery demo What is Windows LAPS? Windows LAPS feedback For more free technical skilling on the latest in Windows, Windows in the cloud, and Microsoft Intune, view the full Microsoft Technical Takeoff session list. View the full article
  19. Secure, reliable, easy to use. Dive deep into the latest innovations in device management for frontline workers with Microsoft Intune. Watch Device management for the frontline: Intune to the rescue – now on demand – and join the conversation at https://aka.ms/IntuneToTheRescue. To help you learn more, here are the links referenced in the session: Work Trend Index Special Report: Technology Can Help Unlock a New Future for Frontline Workers Device Staging on Apple devices: To stage a device, set up VPP deployment for the Company Portal app, then configure and deploy a specific app configuration policy. To learn more, go to: https://aka.ms/Intune/FLW-home https://aka.ms/Intune/FLW-healthcare For more free technical skilling on the latest in Windows, Windows in the cloud, and Microsoft Intune, view the full Microsoft Technical Takeoff session list. View the full article
  20. Need to dynamically scale Azure Virtual Desktop session hosts to meet your usage needs? Watch Azure Virtual Desktop hostpool management at scale – now on demand – and join the conversation at https://aka.ms/AVDHostpoolManagement. To help you learn more, here are the links referenced in the session: Watch Azure Virtual Desktop: Everything You Need to Know to explore the full capabilities of Azure Virtual Desktop! For more free technical skilling on the latest in Windows, Windows in the cloud, and Microsoft Intune, view the full Microsoft Technical Takeoff session list. View the full article
  21. Flexibility, scalability, and seamless integration within Windows environments in the cloud. See how App Attach with Azure Virtual Desktop supports MSIX, App-V, and other solutions. Watch Azure Virtual Desktop app management – now on demand – and join the conversation at https://aka.ms/AVDAppManagement. To help you learn more, here are the links referenced in the session: Framework packages can be added to a custom image via scripts to prepare for any MSIX package. The script to install MSIX frameworks can be found here. For more free technical skilling on the latest in Windows, Windows in the cloud, and Microsoft Intune, view the full Microsoft Technical Takeoff session list. View the full article
22. We're excited to announce that you can now use custom backgrounds for your basic plans in both Planner in Microsoft Teams and Planner for the web. This addition was a top feature request when we launched the new Planner, and it aims to make your planning more visually appealing and organized.
What are custom backgrounds, and why use them? Custom backgrounds allow you to easily distinguish between different plans. Powered by AI, background suggestions are tailored based on the name of your plan, so you can quickly identify and navigate to the specific projects you are working on without confusion. Furthermore, backgrounds enable you to customize your team projects in a way that's fun and aesthetically pleasing.
How to add custom backgrounds
To add a custom background to your plan, follow these steps:
1. Open the plan details of any basic plan by either selecting the plan name or the dropdown menu next to the plan name in the header.
2. The Plan details pane will open to the right with suggested backgrounds tailored to your plan.
3. Select the background you want to apply.
Try it today
Smart backgrounds are available in the Planner app in Microsoft Teams and Planner for the web. Try it out today and let us know what you think! There are several ways to share your feedback with us: either via the Planner Feedback Portal or directly in the Planner app by selecting More (the question mark) in the upper right corner, then Feedback.
Resources
Check out the Planner adoption page.
Sign up to receive future communication about Planner.
Check out the Microsoft 365 roadmap for feature descriptions and estimated release dates.
Watch Planner demos for inspiration on how to get the most out of Planner.
Watch the recording from September's What's New and What's Coming Next + AMA about the new Planner.
Visit the Planner help page to learn more about the capabilities in Planner.
View the full article
23. Hi all, I have a customer who would like to run a POC of Azure Always On VPN. The customer wants to avoid entering credentials when logging in to the VPN. Is there a document that shows the steps to enable SSO? Is Intune required to enable SSO? Thanks. View the full article
24. Hi everyone! Tyson Paul here with this month's "Check This Out!" (CTO!) guide. Our goal with these posts is to guide you toward content that piques your interest, whether it's for learning, troubleshooting, or discovering new sources. Each month, we'll give you a snapshot of intriguing blog content, provide direct links to the source material, and introduce you to other valuable blogs you might not know about yet. If you're a long-time reader, you'll notice this series is similar to our previous "Infrastructure + Security: Noteworthy News" series. We hope you find this new format just as helpful and engaging. Thank you for your continued support from all of us on the Core Infrastructure and Security Tech Community blog team!

Title: Lab: Manage Virtual Networks at Scale with Azure Virtual Network Manager (AVNM)
Team Blog: Azure Networking
Author: andreamichael
Publication Date: 03/05/2025
Article Summary: The article introduces a lab for learning Azure Virtual Network Manager (AVNM) focused on managing virtual networks at scale. The lab provides an overview of AVNM's capabilities, including setting up connectivity, security, and routing configurations for virtual networks. It guides users through deploying Azure Resource Manager (ARM) templates, creating network managers, grouping networks, and setting up hub-and-spoke topologies. The lab also covers IP address management, security rule implementation, and analysis with AVNM's virtual network verifier tool. Participants are advised to ensure proper permissions, deploy resources, and follow clean-up procedures after the lab.

Title: What's new in Microsoft Intune: February 2025
Team Blog: Microsoft Intune
Author: ScottSawyer
Publication Date: 02/27/2025
Article Summary: In February 2025, Microsoft Intune introduced several enhancements to balance productivity and security. Key updates include improvements to the Managed Home Screen for Android, featuring QR code authentication for sign-in and custom ringtone selection to reduce confusion in environments with shared devices. The release also includes a more detailed device information page to aid troubleshooting. Additionally, the Device query feature for Windows devices, now generally available, allows IT professionals to swiftly assess configurations and detect inconsistencies across multiple devices, improving efficiency and decision-making. These updates aim to enhance user empowerment while maintaining robust security protocols.

Title: Azure File Sync: faster, more secure and Windows Server 2025 support
Team Blog: Azure Storage
Author: Vritika
Publication Date: 02/21/2025
Article Summary: Azure File Sync has introduced several updates enhancing performance, security, and compatibility, including a 7x faster server onboarding and a 10x increase in sync performance. It now supports Windows Server 2025, enabling improved scalability, security, and cloud integration. The platform integrates with Azure's Copilot for AI-driven troubleshooting and has added managed identities for secure authentication. These advancements streamline server provisioning, boost sync efficiency, and offer centralized management through the Windows Admin Center. Together, these features enhance Azure File Sync's role in facilitating seamless data migration and efficient, secure cloud integration for businesses.
Title: Announcing General Availability of Azure Dl/D/E v6 VMs powered by Intel EMR processor & Azure Boost
Team Blog: Azure Compute
Author: AndyJia_Azure
Publication Date: 02/10/2025
Article Summary: Microsoft Azure has introduced the General Availability of its Dl/D/E v6 series Virtual Machines, powered by Intel's 5th Gen Xeon processors, offering enhanced performance for both General Purpose and Memory Optimized workloads. The VMs, available in multiple configurations, feature improved scalability, local and remote NVMe SSD support, and Azure Boost technology for enhanced storage and network capabilities. They deliver significant performance improvements, including up to 400k IOPS, 200 Gbps network bandwidth, and a 4x boost in AI workloads. These VMs are now available across multiple Azure regions, with more to follow.

Title: Active Directory is 25 Years Old. Do You Still Manage It Like It's 1999?
Team Blog: Core Infrastructure and Security
Author: LizTesch
Publication Date: 03/06/2025
Article Summary: The article, written by Liz Tesch, emphasizes the need for modern management practices for Microsoft's Active Directory, which is 25 years old. Despite its longevity, many organizations still manage AD as if it were the late 1990s, exposing themselves to security risks due to outdated practices such as location-based OU structures, over-privileged service accounts, flat support structures, and ineffective deprovisioning processes. To mitigate these risks, organizations should align their AD structure with current security models, review and limit privileges of service accounts, streamline access controls, and ensure robust deprovisioning processes for both human and service accounts.

Title: Way to minimize the impact of Allocation Failure issue in Cloud Service Extended Support
Team Blog: Azure PaaS
Author: JerryZhangMS
Publication Date: 02/21/2025
Article Summary: The article addresses mitigating the impact of Allocation Failure in Cloud Service Extended Support (CSES). While the common solutions like redeployment lead to downtime, the blog offers a strategy to minimize disruption by switching requests to a newly created service. This involves creating a new CSES with updated settings and redirecting traffic via domain name adjustments. For custom domains, this means updating CNAME or A records. For scenarios using FQDN, a brief downtime may occur due to DNS changes. The article asserts these methods can significantly reduce downtime, aiming for zero downtime with custom domains and under one minute for FQDN scenarios.

Title: 5 years of Arc Jumpstart with a refreshed website
Team Blog: Azure Arc
Author: liorkamrat
Publication Date: 02/24/2025
Article Summary: In February 2025, Arc Jumpstart celebrates five years by launching a redesigned website, enhancing user experience with features like dark/light mode, improved accessibility, responsive design, and streamlined navigation. The update aligns with the mission to support the Microsoft Adaptive Cloud approach, focusing on automation, scalability, and open-source collaboration. New features like Jumpstart Gems and Badges aim to enrich user engagement and cloud proficiency. Enhanced GitHub issue templates facilitate feedback and maintenance. Arc Jumpstart evolves to unify distributed systems, integrate AI, and enable operations across hybrid, multicloud, edge, and IoT environments.

Title: We're moving!
Team Blog: Azure Stack
Author: Cosmos_Darwin
Publication Date: 11/25/2024
Article Summary: Microsoft has announced Azure Local as a new chapter for adaptive cloud infrastructure, replacing Azure Stack HCI and offering features like lower-cost edge devices and disconnected operations, with seamless transition for existing users. All related content will move to the Azure Arc blog as part of a unification process. This change was introduced at Microsoft Ignite 2024, and the team expresses gratitude for user engagement over the years. Azure Local, powered by Azure Arc, promises continued innovation and encourages followers to stay updated on the Azure Arc blog.

Title: Securely Integrating Azure API Management with Azure OpenAI via Application Gateway
Team Blog: Azure Architecture
Author: Sabyasachi-Samaddar
Publication Date: 02/25/2025
Article Summary: The article outlines a technical guide for securely integrating Azure OpenAI with Azure API Management (APIM) using Azure Application Gateway. It addresses the need for enterprises to secure Azure OpenAI, which can be exposed over the public internet, by implementing a solution that confines traffic within an Azure Virtual Network (VNET) using Private Endpoints. The strategy involves deploying APIM within an internal VNET as a secure proxy, utilizing Application Gateway for secure external access with Web Application Firewall (WAF) rules and SSL termination. The guide details the configuration of VNETs, subnets, and Network Security Groups (NSGs) to ensure network segmentation and security. This scalable architecture protects OpenAI from direct internet exposure while permitting controlled API access, leveraging managed identity authentication and enforcing granular network control.

Title: New survey - Windows Server application survey!
Team Blog: Containers
Author: ViniciusApolinario
Publication Date: 01/21/2025
Article Summary: Microsoft has launched a new survey aimed at gathering insights on how customers approach Windows Server application modernization. The survey seeks to understand challenges, modernization processes, and triggers from customers to help Microsoft align its goals and prioritize work for future developments. The company values customer feedback to enhance their products and is encouraging participation in the survey to shape its plans for the upcoming years. Participants can access the survey at https://aka.ms/WSAppModSurvey and are encouraged to share the link with others.

Title: SMB security hardening in Windows Server 2025 & Windows 11
Team Blog: Storage at Microsoft
Author: NedPyle
Publication Date: 08/23/2024
Article Summary: Microsoft's Secure Future Initiative (SFI) has introduced enhanced SMB security features in Windows 11 24H2 and Windows Server 2025. Key updates include mandatory SMB signing by default, NTLM blocking to enforce Kerberos authentication, and an authentication rate limiter to mitigate brute force attacks. Other enhancements include disabling insecure guest authentication, enforcing SMB protocol version management, and supporting SMB client encryption and SMB over QUIC across all Windows Server 2025 editions. These updates aim to bolster security by minimizing vulnerabilities in SMB, a crucial protocol for remote file and data access. Users can preview these OS updates now.

Title: Azure Private Endpoint vs. Service Endpoint: A Comprehensive Guide
Team Blog: FastTrack for Azure
Author: SriniThumala
Publication Date: 01/06/2025
Article Summary: The article compares Azure Private Endpoints and Service Endpoints as methods for enhancing security and connectivity for applications hosted on Microsoft Azure. Service Endpoints provide secure connections using public IPs routed through Azure's network, suitable for basic security needs with Network Security Group integration. Private Endpoints offer higher security by using private IPs, ensuring traffic remains internal for sensitive workloads or regulatory compliance. Use Service Endpoints for simpler security setups and reduced latency; choose Private Endpoints for full network isolation and strict security. The article advises selecting based on application security needs and performance requirements.

Title: Optimizing your Hyper-V hosts
Team Blog: Windows OS Platform
Author: Steven Ekren
Publication Date: 02/12/2025
Article Summary: The article provides insights on optimizing Hyper-V hosts by leveraging CPU scheduling and live migration settings. It discusses the relationship between physical CPUs, cores, and logical processors, detailing how virtual processors (VPs) are managed. Key optimization strategies include dedicating CPUs to the host via MinRoot to minimize resource contention, setting appropriate limits for live migrations to balance speed and system impact, and utilizing network configurations like RDMA for efficient data transfers. The article highlights tools and commands, such as Performance Monitor and PowerShell, to evaluate and implement these optimizations effectively.

Title: Revolutionizing Network Management and Performance with ATC, HUD and AccelNet on Windows Server 2025
Team Blog: Networking
Author: AnirbanPaul
Publication Date: 11/04/2024
Article Summary: The release of Windows Server 2025 introduces three significant innovations in network management: Network ATC, Network HUD, and AccelNet. Network ATC simplifies network configurations by automating deployments and ensuring consistency across clusters, reducing errors, and handling configuration drift. Network HUD is designed to detect, prevent, and alert on network issues using real-time data analysis, ensuring stability across physical and virtual components. AccelNet optimizes SR-IOV management for virtual machines, enhancing high-performance network workloads by reducing latency while simplifying configuration and health monitoring. Together, these features enhance network efficiency and reliability, making them vital for modern digital environments.

Title: Azure Virtual Desktop now supports Azure Extended Zones
Team Blog: Azure Virtual Desktop
Author: TomHickling
Publication Date: 11/25/2024
Article Summary: Azure Virtual Desktop now supports deployment in Azure Extended Zones, enhancing location options for low-latency and data-residency workloads in metropolitan areas. The first zone is in Los Angeles, California. Access requires a request, and deploying host pools differs slightly due to the lack of a default outbound route. Internet access can be facilitated using Azure Load Balancer, Azure Firewall, or third-party firewalls. The Azure portal now allows creation or selection of a Load Balancer during host pool setup. Limited VM family availability is noted due to zone size. More details are available through specified Azure resources.
Title: ADSS TSync vs Entra Cross-Tenant Sync: A Comprehensive Comparison
Team Blog: Security, Compliance, and Identity
Author: SankaraNarayananMS
Publication Date: 03/06/2025
Article Summary: The article compares ADSS Tenant Sync and Entra Cross-Tenant Sync for managing identities across multiple Azure AD tenants. ADSS Tenant Sync, managed by Microsoft's consulting team, offers a centralized, customizable synchronization model ideal for complex organizations needing advanced features. In contrast, Entra Cross-Tenant Sync, a native Microsoft feature, provides a cost-effective, integrated solution with simpler authentication, limiting customization but emphasizing ease of management. The choice between them depends on an organization's needs for customization, budget, and integration with existing systems. Both aim to streamline identity management across tenants in different ways.

Title: 3 internal obstacles to overcome for comprehensive security
Team Blog: FastTrack
Author: JulieHersum
Publication Date: 01/28/2025
Article Summary: Organizations face significant cybersecurity challenges, with frequent incidents and high costs. Microsoft emphasizes comprehensive security solutions, such as Microsoft Defender XDR, to protect data and technology. However, deploying these solutions can be hindered by internal obstacles, including reluctance to replace legacy systems due to sunk cost fallacy, concerns about secure integration, and resource constraints. To overcome these issues, Microsoft offers resources like FastTrack to facilitate easier deployment. By adopting Microsoft Defender, organizations can achieve unified security, improve their security posture, and protect against cyber threats more effectively and efficiently.

Title: Cloud security in the fast lane: Navigating PaaS challenges
Team Blog: Azure Infrastructure
Author: seanwhalen
Publication Date: 03/06/2025
Article Summary: The article discusses the security challenges and strategies associated with Platform as a Service (PaaS) in cloud computing. As PaaS promotes innovation and scalability, it also introduces unique security hurdles, such as network integration issues, data exfiltration risks, a lack of infrastructure visibility, and insider threats. The article highlights the importance of adopting zero-trust models, strong access controls, and continuous monitoring to protect sensitive data. Azure's network security perimeter is presented as a comprehensive solution to enhance security through micro-segmentation, data exfiltration prevention, and unified security management, critical amidst increasing PaaS attacks.

Title: Step-by-Step Guide: How to use Temporary Access Pass (TAP) with internal guest users
Team Blog: ITOps Talk
Author: dishanfrancis
Publication Date: 01/13/2025
Article Summary: The article discusses the benefits of passwordless authentication, highlighting its enhanced security compared to traditional password-based methods. Microsoft Entra ID supports various passwordless authentication options such as Windows Hello, Microsoft Authenticator, and Passkeys (FIDO2). The article focuses on the use of Temporary Access Pass (TAP) as an initial authentication method to enable passwordless options. Originally available only for internal users, TAP now supports internal guest users: accounts in the same directory but with guest-level access, like contractors. The article walks through setting up TAP for internal guest users, ensuring a more secure login process.
Title: Removal of Azure Policy aliases for Microsoft.Insights/alertRules
Team Blog: Azure Governance and Management
Author: ShannonHicks
Publication Date: 03/05/2025
Article Summary: The article discusses the deprecation of the Microsoft.Insights/alertRules resource type and the removal of associated Azure Policy aliases. As a result, policies referencing these aliases will not be evaluated, with little impact expected since they usually target already-removed resource types. Attempts to modify such policy definitions will be blocked. Affected built-in policies, including "Metric alert rules should be configured on Batch accounts," will also be deprecated. To mitigate effects, users should identify affected policies, update their definitions, test the updates, and monitor for future Azure Policy changes to ensure continued compliance and governance.

Title: New Cluster-Wide Control For Virtual Machine Live Migrations In Windows Server and Azure Stack HCI
Team Blog: Failover Clustering
Author: Steven Ekren
Publication Date: 01/05/2023
Article Summary: The article discusses a new feature in Windows Server 2022 and Azure Stack HCI, which simplifies managing parallel live migrations in a cluster by introducing the MaximumParallelMigrations cluster property. Previously, administrators had to manually configure each node, but the new property allows a single setting to be inherited by all nodes within a cluster, even when new servers are added. This ensures consistent configuration across the cluster. The default value is one parallel migration, but administrators can adjust this based on their system's capabilities. It enhances reliability and simplifies management across diverse systems.

Title: Daily schedule: Microsoft in-booth sessions at NVIDIA GTC
Team Blog: Azure High Performance Computing (HPC)
Author: SarahYousuf
Publication Date: 03/06/2025
Article Summary: The article details Microsoft's participation at the NVIDIA GTC AI Conference from March 17-21 in San Jose, CA, outlining daily sessions at Microsoft's booth #514. Key sessions include discussions on AI applications across industries, integrating NVIDIA technologies with Azure cloud services. Topics range from AI-driven manufacturing processes, rare disease detection, large language models, and AI infrastructure to generative AI applications. Presentations also cover Azure's confidential computing and NetApp Files, emphasizing Microsoft's AI innovation and collaborations with NVIDIA to enhance performance, scalability, and security in AI deployments. The blog encourages attendees to engage with Microsoft's AI offerings at the event.

Title: From the frontlines: Revolutionizing healthcare workers experience
Team Blog: Intune Customer Success
Author: Intune_Support_Team
Publication Date: 02/28/2025
Article Summary: The article by Catarina Rodrigues discusses the transformative impact of technology in healthcare, focusing on Microsoft's Intune platform that manages mobile devices in critical environments like hospitals. Intune enhances healthcare operations by securing data access and allowing seamless device management across platforms. Within ICU settings, Android tablets are used to provide nurses with crucial patient information. With Intune, these devices can operate safely with shared access, authenticated sign-ins, and timely updates. The blog highlights the flexibility and security of Intune, illustrating how it streamlines communication and workflow for healthcare professionals, ultimately improving patient care.
Title: Collecting Debug Information from Containerized Applications
Team Blog: Ask The Performance Team
Author: Becky
Publication Date: 11/17/2023
Article Summary: The article, written by Debug Engineer Will Aftring, guides developers and IT admins on collecting debug information from containerized Windows applications. It highlights the complexities of migrating applications to containers, detailing steps such as identifying dependencies, configuring settings, and managing network communications. The author provides troubleshooting techniques when applications within containers fail to run correctly, including checking console logs, accessing log files, and using external tools for debugging. Strategies for handling memory dumps are also discussed. The article aims to simplify the debugging process and assist in the efficient transition of applications to a containerized environment.

Title: Announcement: System Center 2025 is GA
Team Blog: System Center
Author: AakashMSFT
Publication Date: 11/07/2024
Article Summary: System Center 2025 is now generally available, enhancing datacenter operations with a focus on infrastructure modernization and security. New features include support for heterogeneous infrastructure management, enhanced security with reduced reliance on legacy authentication, and improved management capabilities with Azure Arc integration. It supports the latest Windows Server 2025 and provides tools for managing virtual machines, enhancing data security, and streamlining IT operations. Key updates include seamless Azure integration, enhanced generation 2 VM support, and the discontinuation of obsolete features. Users can access System Center 2025 through the evaluation center or Microsoft Admin Center to explore these enhancements.

Title: Microsoft Cost Management updates—February 2025 (summary)
Team Blog: FinOps
Author: flanakin
Publication Date: 03/05/2025
Article Summary: The February 2025 Microsoft Cost Management updates include new AccountId and InvoiceSectionId columns in cost details datasets for better cost allocation. Users can now access Copilot directly from the Cost Management overview with sample prompts. Updates about the FinOps Open Cost and Usage Specification are available in the Learning FOCUS blog series. New cost-saving features include changes in Azure Reserved VM Instances, Azure NetApp Files support, Azure DevTest Labs hibernation, and Azure Monitor diagnostics. Also introduced are improvements in documentation, API modernization, and new AKS monitoring experiences.

Title: Hyper-V HyperClear RETbleed Update
Team Blog: Virtualization
Author: brucesherwin
Publication Date: 07/19/2022
Article Summary: The article discusses recent disclosures of speculative execution side channel vulnerabilities in Intel and AMD processors, specifically CVE-2022-23825, CVE-2022-29900, CVE-2022-29901, and CVE-2022-28693, similar to the Spectre attack. Microsoft's virtualization team has been using Hyper-V HyperClear, a mitigation architecture, to protect against these vulnerabilities without significant updates. HyperClear uses three main components: Core Scheduler, Virtual-Processor Address Space Isolation, and Sensitive Data Scrubbing, to maintain strong inter-VM isolation and safeguard against speculative execution attacks with minimal performance impact.
Title: Stop Worrying and Love the Outage, Vol IV: Preference items
Team Blog: Ask the Directory Services Team
Author: Chris_Cartwright
Publication Date: 01/28/2025
Article Summary: In the fourth installment of the "Stop Worrying and Love the Outage" series, Chris Cartwright from the Directory Services support team highlights the risks of using Group Policy Preference items that conflict with existing client-side extensions, leading to potential system instability and outages. Using the example of Cipher Suite Ordering, the article illustrates how conflicts between Administrative Templates and Preference items targeting the same registry key can lead to unpredictable outcomes. Cartwright advises against targeting Group Policy registry locations with Preference items, as it creates administrative challenges and system instability, unless it's a necessary workaround for unsupported OS limitations.

Title: Protecting the Public IPs of Secured Virtual Hub Azure Firewalls against DDoS Attacks
Team Blog: Azure Network Security
Author: gusmodena
Publication Date: 02/28/2025
Article Summary: The article discusses the enhancement of Azure Firewalls in Secured Virtual Hubs by configuring specific Azure public IPs, enhancing network security against DDoS attacks. This feature allows for complete control and management of public IP addresses, enabling custom configurations aligned with security policies. Azure DDoS IP Protection can be configured to mitigate attacks, maintaining service availability and security. The article provides steps for enabling DDoS IP Protection and discusses benefits such as enhanced security, flexibility in IP address management, and ensuring a robust defense against DDoS attacks, thereby securing the network infrastructure more effectively.

Title: Get AI ready: What we've learned building AI competency at Microsoft
Team Blog: Microsoft Learn
Author: SandraMarin
Publication Date: 02/13/2025
Article Summary: At Microsoft, developing AI skills and fluency is deemed essential for maximizing the technology's potential. Organizations are encouraged to provide both technical and non-technical team members with AI-learning opportunities, building a foundation for future leadership in the AI era. Jeana Jorgensen, Microsoft's Corporate Vice President of Worldwide Learning, emphasizes the importance of effective AI training programs, acknowledging the unique paths of different organizations. Her blog and the e-book "10 Best Practices to Accelerate Your Employees' AI Skills" offer practical advice and insights to implement effective AI training, helping organizations to evolve, support employees, and foster innovation.

Title: Upcoming Breaking Change in Az SSH for Arc Connections Extension
Team Blog: Azure Tools
Author: stevenbucher
Publication Date: 02/27/2025
Article Summary: The Az SSH extension, crucial for secure Azure VM connections, will undergo a breaking change affecting Azure Arc Machine connections. By May 21, versions prior to 2.0.4 will fail upon installation due to the deprecation of a storage blob. While existing installations will function unless corrupted, reinstalling outdated versions will be impossible. Users should upgrade to at least version 2.0.6 using the Azure CLI to ensure continuity. Additionally, scripts using older versions should be updated. This change is vital for security, and users are advised to stay informed about further updates.
Title: Azure VMware Solution Broadcom VMSA-2025-0004 Remediation
Team Blog: Azure Migration and Modernization
Author: rvandenbedem
Publication Date: 03/04/2025
Article Summary: Microsoft recently identified a critical ESXi vulnerability in Azure VMware Solution and collaborated with Broadcom to develop a secure patch. Using advanced analytics for early detection, Microsoft swiftly assembled a global team to work on the ESXi 8.0 U2d Build 24585300 patch. The patch is set for completion within 30 days, ensuring proactive security for customers. New Azure VMware Solutions deployed after March 4, 2025, will have the patch pre-applied. The company's in-depth risk management and partnership with Broadcom enhance overall security, allowing for quick vulnerability responses and effective digital asset protection.

Title: Simplify frontline workers' sign-in experience with QR code authentication
Team Blog: Microsoft Entra (Azure AD)
Author: Robin Goldstein
Publication Date: 02/25/2025
Article Summary: Microsoft has introduced QR code authentication in Microsoft Entra ID, aimed at easing sign-ins for frontline workers on shared devices by eliminating the need for usernames and passwords. This feature, now in public preview, allows employees to scan a unique QR code and enter a personal PIN for fast, secure access to essential applications. The system significantly improves efficiency and security, as demonstrated by Contoso Industries, which is transitioning to QR code authentication to simplify app access for its retail employees. The initial feedback has been positive, highlighting the streamlined authentication process and enhanced security measures.

View the full article
25. REGISTER NOW | Open to All Microsoft Partners
We are excited to invite you to an informative session organized by the Tech for Social Impact team! This event provides a great opportunity to learn from experts and participate in meaningful discussions about the unique security challenges faced by nonprofits. Furthermore, we will offer go-to-market resources tailored specifically for nonprofits that partners can take advantage of. The session will feature our speakers, Jerry Carlson and Aysha Kaushik, who will offer valuable insights and strategies to enhance security within nonprofit organizations.
TOPIC: Partner Webinar - Security Conversations with Nonprofits
WHEN: Wednesday, April 2, 2025
TIME: 8:00 AM – 9:30 AM PT / 11:00 AM – 12:30 PM ET / 4:00 PM – 5:30 PM GMT
WHERE: Register Today (online event)
View the full article