Microsoft Windows Bulletin Board

Windows Server

Everything posted by Windows Server

  1. On the lower left, can I remove the duplicate one, right below the separator? And ‘Devices and drives’ text, just give me a clean Local Disk look. View the full article
  2. A pretty odd question, but I am currently stuck without a mouse for about three quarters of the day (except at night), and the keyboard I bought about half a year ago has the number pad sold separately. I have only been left with the Page Up/Down, End and Delete keys, along with the usual arrow keys. View the full article
  3. Exchange Online is imposing a new tenant-wide limit of 3,000 Dynamic Distribution Groups. Few tenants might be affected, but the question might be asked why Microsoft is limiting DDGs at this point. Is it a cunning plan to prompt people to use dynamic Microsoft 365 groups instead? Or are some tenants abusing DDGs in weird and wonderful ways? Who knows, but the limit applies from early April 2025. https://office365itpros.com/2025/03/10/dynamic-distribution-groups-limit/ View the full article
  4. I am a newbie guitarist who has recently started learning fingerpicking systematically by following a quality tutorial author on YouTube. Due to an unstable network, I often miss the live lessons. After communicating with the author himself, I have obtained his written authorization to download the relevant instructional videos. However, I am not familiar with the official YouTube download mechanism, and I don't know how to preserve the sound quality and subtitles of the videos. I would like to ask for the following help: please recommend a reliable and safe YouTube-to-MP4 converter that works on a Windows 11/10 PC, and what are the precautions for maintaining video picture/sound quality after download? I look forward to your detailed guidance! If it's convenient, please share the specific software name and configuration parameters in a private message. Thanks in advance! View the full article
  5. I operate a business that mostly depends on design by simulation, which relies on constant operation of big multi-core PCs at very high CPU utilization. Because of the high utilization, I seem to kill them on a fairly routine basis: 3 Dell 7820 Xeon Golds in the last two years, which is worse than average; I probably kill one every second year on average. We can talk about what's dying separately, it doesn't matter here, the issue at hand is DOWN TIME. View the full article
  6. If you create a Notepad file whose first line is .LOG, every time you open it, Notepad appends the current date and time to the end of the file. View the full article
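As a quick illustration of the behavior described above (the exact timestamp format depends on your locale and regional settings, so treat this as a sketch), a file whose first line is .LOG might look like this after being opened and edited on two occasions:
.LOG
10:15 AM 3/10/2025
notes from the first session
9:42 PM 3/11/2025
notes from the second session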
  7. Simple scenario: VM --> vNIC --> vSwitch (external) --> physNIC --> physSwitch The vNIC assigned to the VM has MAC address aa:aa:aa:aa:aa:aa, the physical NIC (physNIC; the vSwitch of type external is connected to it) has bb:bb:bb:bb:bb:bb. What mechanism ensures that when the VM sends a network packet to the external network (the physical network connected to the physical switch physSwitch), the MAC address of its vNIC (aa:aa:aa:aa:aa:aa) is used, and not the MAC address of the physNIC (bb:bb:bb:bb:bb:bb)? In other words: what makes physSwitch "see" aa:aa:aa:aa:aa:aa when the VM communicates to an external endpoint? View the full article
  8. Model Mondays is an initiative to help you build your knowledge of generative AI models through 5-minute news recaps and 15-minute model spotlights each week. Register and watch the livestream each Monday at 1:30pm ET https://aka.ms/model-mondays/rsvp Join the conversation on Discord each Friday at 1:30pm ET https://aka.ms/model-mondays/chat Catch up with replays, resources and more at any time on GitHub https://aka.ms/model-mondays The generative AI model landscape is getting increasingly crowded. It feels like there are new models being released daily, even before we've had time to understand what the existing set of models can do for us. There are over 1800 models today on the Azure AI Foundry model catalog - and over 1 million community-created model variants on the Hugging Face model hub. How do we deal with information overload, and combat decision fatigue in making model choices? This is where we hope Model Mondays can help! What are Model Mondays? Model Mondays is an 8-week power series where we cut through the noise and shine the spotlight on the most relevant models with the help of expert speakers and hands-on demos. Every Monday at 1:30pm ET we'll host a 30-minute livestream with a 5-min roundup of key news items from the previous week, followed by a 15-minute deep dive into a specific model or class of models. Register for the next episode here. Have questions or want to share your own experiences or insights with those models? Join our #model-mondays channel on Discord where we will continue these conversations, wrapping up each week with a watercooler chat every Friday at 1:30pm ET. This is a beginner-friendly and judgement-free zone where you can bring your questions or participate in show-and-tell demos! It's a chance for us to learn from each other and build on our collective knowledge. Join the #model-mondays channel on Discord here What does the Spotlight cover? Generative AI models are rapidly expanding in scope not just in terms of model providers, but in terms of domain-specific tasks or model-related tooling to support efficient AI customization. The spotlight segment gives you a 15-minute hands-on look at a specific model or class of models - helping you understand what it does, how it works, and when it is suitable for use. Our first season (8 episodes) will focus on model categories like reasoning models, visual generative AI, search & retrieval models, synthetic data generation models, forecasting models and more. We also look at model-driven tooling or open-source software that streamlines processes like fine-tuning, composability, testing and more. Our kickoff episode on March 10 will put the spotlight on the GitHub Model Marketplace and give you a chance to explore some of these models hands-on, with just a GitHub account. Along the way, you'll get to learn about some of the valuable features (like the prompt editor, model comparisons, and built-in code samples) that will get you from prompt to prototype within just a few minutes of exploration. What does the Roundup cover? The roundup segment provides a 5-minute news recap of key announcements from the previous week. Think of this as a "5 things to know" segment where you get a chance to learn about a new model or capability that is now available to you on the Azure AI Foundry model catalog. But wait, there's more. 
The Model Mondays Repo will have a dedicated page for each week's episode where we will collect a lot more links for all the interesting news and content that we heard about in the previous week, in the broader model ecosystem. We welcome your contributions. If you have a news item or project to share, let us know on Discord or add it to the relevant episode-specific issue in our repository. We'll review it and add it to the list if it meets the episode context. Be a part of the conversation! Watch Live on Microsoft Reactor – RSVP Now Join the AI community – Discord Office Hours every Friday – Join Here Get exclusive resources – Explore the GitHub Repo Don't fall behind! Jump in and level up your AI engineering skills with #ModelMondays View the full article
  9. Did you know that you can now add user login to an app deployed on Azure, with just Bicep code? No Portal, CLI, SDK, or app code needed! For those new to Bicep, it's an "infrastructure-as-code" language that can describe all the Azure resources, their connections, and role-based permissions. It's similar to Terraform, but it's Azure-specific and compiles down to ARM JSON files. We encourage developers to use infrastructure-as-code (IaC), since you can then reliably set up the same resource configuration, store your setup in version control, and even programmatically audit your IaC for security issues. Microsoft recently announced a Graph extension that can create Graph resources, like Entra application registrations and service principals. Along with that, it's now possible for Entra applications to be secured using a managed identity as a federated identity credential ("MI as FIC"), which is simpler to manage and create than client secrets and certificates. You never have to worry about an app breaking in production due to a secret or certificate suddenly expiring. Both Azure Container Apps and App Service offer a built-in authentication feature, and they've now extended that feature so that it can be configured with an Entra application using MI as FIC, in either the Portal, CLI, or Bicep. 👀 The Graph extension, MI-as-FIC, and built-in auth support for MI-as-FIC are all currently in "public preview", which means they are subject to change based on community feedback. When we put all those new features together, we now have a 100% Bicep solution for configuring built-in authentication! I've put together minimal templates here, which you can deploy and test for yourself: containerapps-builtinauth-bicep appservice-builtinauth-bicep In the rest of this post, I'll walk through the steps of adding this Bicep configuration to an existing application, for the many developers who are not starting from scratch. Enable the Graph extension The Graph extension requires the "extensions" functionality of Bicep, which was introduced in Bicep version 0.30.3 in September 2024. Add a bicepconfig.json file to your infrastructure folder with these contents: { "experimentalFeaturesEnabled": { "extensibility": true }, "extensions": { "microsoftGraphV1": "br:mcr.microsoft.com/bicep/extensions/microsoftgraph/v1.0:0.1.8-preview" } } If you do get an error about extensions not being understood, you may need to upgrade your Bicep CLI (if using it directly) or the Azure Developer CLI (if you're using "azd" instead). Prepare for Bicep changes Normally, when we provision resources in Bicep, we try to configure everything at once. However, for built-in auth, we need a three-step process, due to the dependencies involved: Create the backend application (either Container Apps or App Service Webapp) with an associated user-assigned managed identity Create the app registration with a reference to the backend application's managed identity Configure the backend application to use built-in auth with that app registration 1) Create the backend app Start with your usual Bicep for creating your backend app. Create a user-assigned managed identity for the backend: resource identity 'Microsoft.ManagedIdentity/userAssignedIdentities@2023-01-31' = { name: 'backend-app-identity' location: location } Associate that identity with the backend. 
For example, for Container Apps: resource app 'Microsoft.App/containerApps@2022-03-01' = { identity: { type: 'UserAssigned' userAssignedIdentities: { '${identity.id}': {} } } Store the client ID of the identity as a secret on the backend. For App Service, store the client ID in an environment variable named OVERRIDE_USE_MI_FIC_ASSERTION_CLIENTID. It should look something like this: appSettings: { OVERRIDE_USE_MI_FIC_ASSERTION_CLIENTID: identity.properties.clientId } For Container Apps, store the client ID in a secret named override-use-mi-fic-assertion-client-id. The exact Bicep depends on whether you're using the Container Apps Bicep module directly, or using a wrapper module. It should look something like this: secrets: [ { name: 'override-use-mi-fic-assertion-client-id' value: acaIdentity.properties.clientId } ] 2) Create the app registration The next step is to create an Entra application registration, along with a federated identity credential based on a managed identity ID, and a service principal representing the Entra app. Put all of this in a appregistration.bicep file that uses the microsoftGraphV1 extension: extension microsoftGraphV1 param issuer string param clientAppName string param clientAppDisplayName string param clientAppScopes array = ['User.Read', 'offline_access', 'openid', 'profile'] param webAppEndpoint string param webAppIdentityId string param serviceManagementReference string = '' param cloudEnvironment string = environment().name param audiences object = { AzureCloud: { uri: 'api://AzureADTokenExchange' } AzureUSGovernment: { uri: 'api://AzureADTokenExchangeUSGov' } AzureChinaCloud: { uri: 'api://AzureADTokenExchangeChina' } } // Get the MS Graph Service Principal based on its application ID: var msGraphAppId = '00000003-0000-0000-c000-000000000000' resource msGraphSP 'Microsoft.Graph/servicePrincipals@v1.0' existing = { appId: msGraphAppId } var graphScopes = msGraphSP.oauth2PermissionScopes resource clientApp 'Microsoft.Graph/applications@v1.0' = { uniqueName: clientAppName displayName: clientAppDisplayName signInAudience: 'AzureADMyOrg' serviceManagementReference: empty(serviceManagementReference) ? null : serviceManagementReference web: { redirectUris: [ '${webAppEndpoint}/.auth/login/aad/callback' ] implicitGrantSettings: { enableIdTokenIssuance: true } } requiredResourceAccess: [ { resourceAppId: msGraphAppId resourceAccess: [ for (scope, i) in clientAppScopes: { id: filter(graphScopes, graphScopes => graphScopes.value == scope)[0].id type: 'Scope' } ] } ] resource clientAppFic 'federatedIdentityCredentials@v1.0' = { name: '${clientApp.uniqueName}/miAsFic' audiences: [ audiences[cloudEnvironment].uri ] issuer: issuer subject: webAppIdentityId } } resource clientSp 'Microsoft.Graph/servicePrincipals@v1.0' = { appId: clientApp.appId } output clientAppId string = clientApp.appId output clientSpId string = clientSp.id Let's look at a few interesting lines in that module: signInAudience: 'AzureADMyOrg': This restricts the sign-in to your own organization. It's not currently possible to fully set up Entra External ID in Bicep. Check out this project for External ID setup with the Graph SDK. In addition, the MI+FIC approach can only be used for workforce tenants, not CIAM tenants. redirectUris: This matches the redirect URI of the built-in auth feature, ".auth/login/aad/callback". There is no need to specify a localhost redirect URI, since built-in auth only works on the deployed app. 
implicitGrantSettings: { enableIdTokenIssuance: true }: Along with the requiredResourceAccess, this grants the Entra application the permissions needed to do a user login flow, which uses the OpenID Connect protocol (OIDC) on top of OAuth2. With that module saved, now you can reference it from main.bicep, passing in the required parameters: var issuer = '${environment().authentication.loginEndpoint}${tenant().tenantId}/v2.0' module registration 'appregistration.bicep' = { name: 'reg' scope: resourceGroup params: { clientAppName: '${prefix}-entra-client-app' clientAppDisplayName: 'MyWebsite Entra Client App' issuer: issuer webAppEndpoint: backend.outputs.uri webAppIdentityId: backend.outputs.identityPrincipalId } } The issuer URL is constructed based off your environment's login endpoint and tenant ID, so that should not require changing. However, you'll need to make sure the following parameters are set correctly: webAppEndpoint: The full endpoint for the deployed application, including "https" protocol. webAppIdentityId: The principal ID of the managed identity associated with the deployed application. 3) Configure built-in authentication For the third and final step, you need to configure built-in authentication for your backend application, with a reference to that Entra application registration. The Bicep for configuration is slightly different across Container Apps and App Service, but they share properties in common: redirectToProvider: The value of 'azureactivedirectory' tells built-in auth to use Entra ID to handle the user login unauthenticatedClientAction: The value of 'RedirectToLoginPage' tells built-in auth to direct any unauthenticated users to the login page. identityProviders/azureActiveDirectory: These settings contain the reference to the Entra application registration, issuer URL, and the name of the app setting storing the managed identity client ID. For App Service, that setting must be 'OVERRIDE_USE_MI_FIC_ASSERTION_CLIENTID'. For Container apps, that setting must be 'override-use-mi-fic-assertion-client-id'. tokenStore: Whether the built-in auth feature should store tokens in a persistent storage. This is only needed if your app needs to access the access tokens itself, but not needed for the login flow itself. App Service comes with its own token store, but for a Container Apps token store, you must pass in a Blob storage account. 
For App Service, save this module in a file named builtinauth.bicep: param appServiceName string param clientId string param issuer string param includeTokenStore bool = false resource appService 'Microsoft.Web/sites@2022-03-01' existing = { name: appServiceName } resource configAuth 'Microsoft.Web/sites/config@2022-03-01' = { parent: appService name: 'authsettingsV2' properties: { globalValidation: { requireAuthentication: true unauthenticatedClientAction: 'RedirectToLoginPage' redirectToProvider: 'azureactivedirectory' } identityProviders: { azureActiveDirectory: { enabled: true registration: { clientId: clientId clientSecretSettingName: 'OVERRIDE_USE_MI_FIC_ASSERTION_CLIENTID' openIdIssuer: issuer } validation: { defaultAuthorizationPolicy: { allowedApplications: [] } } } } login: { tokenStore: { enabled: includeTokenStore } } } } For Container Apps, save this module in a file named builtinauth.bicep: param containerAppName string param clientId string param issuer string // Only needed if using a token store: param includeTokenStore bool = false param blobContainerUri string = '' param appIdentityResourceId string = '' resource app 'Microsoft.App/containerApps@2023-05-01' existing = { name: containerAppName } resource auth 'Microsoft.App/containerApps/authConfigs@2024-10-02-preview' = { parent: app name: 'current' properties: { platform: { enabled: true } globalValidation: { redirectToProvider: 'azureactivedirectory' unauthenticatedClientAction: 'RedirectToLoginPage' } identityProviders: { azureActiveDirectory: { enabled: true registration: { clientId: clientId clientSecretSettingName: 'override-use-mi-fic-assertion-client-id' openIdIssuer: issuer } validation: { defaultAuthorizationPolicy: { allowedApplications: [] } } } } login: { tokenStore: { enabled: includeTokenStore azureBlobStorage: includeTokenStore ? { blobContainerUri: blobContainerUri managedIdentityResourceId: appIdentityResourceId } : {} } } } } With that module saved, reference it from main.bicep, passing in the required parameters (note that the parameter name must match the module's issuer parameter): module builtinauth 'builtinauth.bicep' = { name: 'builtinauth' scope: resourceGroup params: { containerAppName: backend.outputs.name clientId: registration.outputs.clientAppId issuer: issuer includeTokenStore: false } } All together now For an example of making those changes to a project, check out this pull request where I added built-in auth to an existing Azure Container app. Or you can check out my minimal templates for built-in auth, for Container Apps or App Service. ⚠️ Keep in mind the current limitations to this approach (as of February 2025): When we run the app locally, it will not have a user login flow. That should be fine if you're only using user login to restrict access to the app, but will make development more difficult if you have features that rely on the details of logged in users, like their Entra ID. For local development, you would need to use the MSAL SDK in your language of choice, and you would need to secure the Entra application registration with either a secret or certificate, since your local server would not have a managed identity to use as the credential. If you are trying to use Entra External ID, you cannot yet configure everything needed using the Graph Bicep extension. You would need to set up External ID with either the Graph SDK, as we do in this project, or in the Portal. The Graph extension, MI-as-FIC, and built-in auth support for MI-as-FIC are all currently in "public preview", which means they are subject to change based on community feedback. 
This is a great solution if you are deploying apps for your organization and want to ensure that only your organization's users can see them! You should never rely on "security by obscurity" - assuming that a public endpoint won't get accessed by unauthorized users. Always protect your endpoints, either with user login, private networks, or both. To a more secure future! 🔐 View the full article
  10. I have a Dell G15 15.6" FHD 120Hz Gaming Laptop - Intel Core i7, 16GB Memory, NVIDIA GeForce RTX 4060, WIN 11. Had it a few months, bought new. It used to detect my headphones when I plugged them in, but now it doesn't; if I restart the laptop it detects them with no issue. Headphones are the only thing I use for audio, so usually it's not an issue, unless they get pulled out accidentally or when I put the laptop away when I leave the house. Then when I return I have to remember to plug the headphones in FIRST before I start up the laptop, which I never do. So this issue is a mild pain in my butt. Tried reinstalling drivers and made sure they are up to date from the Dell site. The site has an auto-detect driver thingie that shows they are all up to date. Any ideas? View the full article
  11. Recently, after upgrading to Windows 11, I found that the snipping tool that comes with Windows 11 is particularly unpleasant to use. Sometimes you can't find where the screenshot was saved, and the editing features are very basic; even simple cropping has to be handled by other software. As an office worker who often needs to prepare document records and meeting minutes, a smooth, capable screenshot tool is really important! I mainly hope to meet the following requirements: 1️⃣ simple, fast operation (preferably with customizable shortcut keys) 2️⃣ support for editing and annotating directly after taking a screenshot 3️⃣ an easily changeable save path 4️⃣ doesn't take up too many system resources. If it also offers video recording alongside screenshots, even better! I'm currently using the free snipping tool as a temporary stopgap, but it doesn't feel professional enough. Which snipping tool for Windows 11 are you using? Any hidden gems you'd recommend? View the full article
  12. 2024-09 Cumulative Update for Windows 11 Version 22H2 for x64-based Systems (KB5043076) It just won't install; apparently it has tried every day for a while. The error code is 0x800736b3. I am not a W11 expert by any stretch, and frankly not sure what to do. What is kind of weird is that another version(?) is available (same title, KB5043145). This home built PC originally had W10, and I upgraded years ago without any issues. View the full article
  13. Hi everyone, The purpose of the Document ID feature in SharePoint is to create durable links, but what is the intended way to generate and copy those links efficiently? Most common link creation methods such as the "Create Link" button in a SharePoint record still generate path-based links even with Document ID enabled. Even using the Document ID column doesn’t provide a direct way to copy the Doc ID URL, as clicking it simply redirects back to a path-based link. The only way I’ve found to copy a Document ID link is: go to the SharePoint library, right-click the record, open the details pane, then right-click and copy the Document ID URL. This method is cumbersome and impractical, especially for synced files. As a result, users will likely default to copying path-based links, which defeats the purpose of durable Doc ID links. Has anyone found a better way to easily generate and copy Document ID links without extra steps? It seems like this issue has been raised for years without a proper solution. Thanks! View the full article
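For reference, a Document ID link resolves through SharePoint's DocIdRedir.aspx page rather than the file path, so the durable URL being discussed here generally follows a pattern like the one below (the site URL and Document ID are hypothetical placeholders):
https://contoso.sharepoint.com/sites/Records/_layouts/15/DocIdRedir.aspx?ID=ABC123-1234567890-12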
  14. Hello. I have an ADF Dataflow which has two sources, a blob container with JSON files and an Azure SQL table. The sink is the same SQL table as the SQL source, the idea being to conditionally insert new rows, update rows with a later modified date in the JSON source or do nothing if the ID exists in the SQL table with the same modified date. In the Dataflow I join the rows on id, which is unique in both sources, and then use an Alter row action to insert if the id column from the SQL source is null, update if it's not null but the last updated timestamp in the JSON source is newer, or delete if the last updated timestamp in the JSON source is the same or older (delete is not permitted in the sink settings so that should ignore/do nothing). The problem I'm having is I get a primary key violation error when running the Dataflow as it's trying to insert rows that already exist: For example in my run history (160806 is the minimum value for ID in the SQL database): So for troubleshooting I put a filter directly after each source for that ticket ID so when I'm debugging I only see that single row. Now here is the configuration of my Alter row action: It should insert only if the SQLTickets id column is null, but here in the data preview from the same Alter rows action. It's marked as an insert, despite the id column from both sources clearly having a value: However, when I do a data preview in the expression builder itself, it correctly evaluates to false: I'm so confused. I've used this technique in other Dataflows without any issues so I really have no idea what's going on here. I've been troubleshooting it for days without any result. I've even tried putting a filter after the Alter row action to explicitly filter out rows where the SQL id column is not null and the timestamps are the same. The data preview shows them filtered out but yet it still tries to insert the rows it should be ignoring or updating anyway when I do a test run. What am I doing wrong here? View the full article
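To make the intended Alter Row logic concrete, the conditions described above would typically be written along the following lines in the mapping data flow expression language (the stream and column names JSONTickets, SQLTickets, id, and lastUpdated are placeholders for whatever the join actually produces, so treat this as a sketch rather than the exact configuration):
Insert if: isNull(SQLTickets@id)
Update if: !isNull(SQLTickets@id) && JSONTickets@lastUpdated > SQLTickets@lastUpdated
Delete if: !isNull(SQLTickets@id) && JSONTickets@lastUpdated <= SQLTickets@lastUpdated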
  15. Freedom of information is fundamental to a thriving and transparent society. Restricting information can have severe consequences, undermining individuals, societies, and even the global community. You erode trust when Co-Pilot attempts to sugarcoat or completely restrict facts and history based on some developer's ideology. In the absence of verified and relevant facts, however abrasive they may be, rumors, conspiracy theories, and false narratives fill the void. This leads to confusion and polarization, making it harder to address the real issues. Restricting facts limits opportunities for breakthroughs in science, technology, medicine, and education, stifling progress! Censorship of facts only serves those in power, enabling corruption and oppression. It silences dissent, reduces accountability, and weakens democracy. You are empowering authoritarianism by doing this. When researching famous historical figures such as Aristotle, Co-Pilot restricted the information, declaring that it violated its "intent" to provide safe and quality content, saying that the quotes of Aristotle were demeaning towards women in his hierarchical view on genders. Personally, this provokes anger in me, in that your developers seem to believe that most individuals using this platform cannot be trusted with historical FACTS. In an attempt to force their ideological views on the masses, they place guardrails around facts they do not personally agree with. Now these facts can still be accessed given the right prompting, but the most direct route to this information is blocked. My suggestion would be to stop invoking some fake moral obligation to protect people from themselves, when in fact you are attempting to manipulate facts to your own preferences. We, the consumers, are not all children and should not be treated as such by your AI. View the full article
  16. Nonprofit organizations flourish when they create links between different generations. Ask yourself, “What are the best ways to engage younger supporters to keep them active with our organization?” Younger generations offer new viewpoints, digital expertise, and a strong desire for social change, yet their engagement preferences vary from those of previous generations. There are numerous methods to keep them active, ranging from engaging social media initiatives to practical volunteer opportunities and creative fundraising approaches. SHARE your best strategies, success stories, or creative ideas. View the full article
  17. Hello MCT Community. My question is about those of us whose MCT expires before July 2025. We should receive the renewal link 90 days beforehand, correct? If we don't receive it, does anyone know how to file a claim? Thank you very much. View the full article
  18. I try to log in to Cloudflare at https://dash.cloudflare.com/login but the captcha gets stuck in an endless loop. On Edge Stable I can complete the captcha properly, and on a previous Edge Canary build it also worked fine. View the full article
  19. A robust PostgreSQL development ecosystem is essential for the success of Azure Database for PostgreSQL. Beyond substantial engineering and product initiatives on the managed service side, Microsoft has invested in the PostgreSQL Open Source (OSS) engine team. This team is comprised of code contributors and committers to the upstream PostgreSQL open-source project, aiming to ensure that development is well-funded, healthy, and thriving. In this first part of a two part blog post, you will learn about who the Microsoft PostgreSQL OSS Engine team is, their code contributions to upstream PostgreSQL & their journey during 2024. In the second part, you will get a sneak preview of upcoming work in 2025 (and PG18 cycle) and more. Here are quick pointers of what is in store for you: Meet our team What does our team do? The village beyond our team What are the team's recent contributions? Async IO – read stream IO Combining UNION & IS [NOT] NULL query planner improvements VACUUM WAL volume reduction and performance improvements Libpq performance and cancellation Partitioned tables and query planner improvements Memory performance enhancements PG upgrade optimization Developer tool See you soon Meet Our Team The Microsoft PostgreSQL OSS engine team already had an impressive set of team members: Andres Freund Daniel Gustafsson David Rowley Melanie Plageman Mustafa Melih Mutlu Nazır Bilal Yavuz Thomas Munro In 2024 awesome upstream code contributors and committers Amit Langote Ashutosh Bapat Rahila Syed Tomas Vondra joined our group making our team even more well-rounded and versatile. Microsoft PostgreSQL OSS Engine Team What does our team do? Our team actively contributes to various PostgreSQL development projects and plays a leading or co-leading role in significant projects. Additionally, we participate in numerous initiatives aimed at enhancing PostgreSQL code quality and improving the development process. Examples of team's work includes but not limited to: Modernizing PostgreSQL APIs Improving the build system CI/CD enhancements Handling bug reports, and Addressing reported performance regressions, and more. Regular activities also involve engagement with other developers, design reviews & discussions, code reviews, and testing patches. The team allocates considerable resources to projects that intersect community interests, contributor interests, and user/customer interests. In addition to significant upstream code work, the team has also made notable contributions to the community by delivering numerous talks, organizing events, and serving on community committees. The village beyond our team PostgreSQL development fundamentally relies on teamwork, making close collaboration and partnership central aspects for our team. Every patch that merges upstream undergoes a rigorous review and vetting process from core PostgreSQL developers who are often from different companies, different countries, and different cultures—involving in-depth discussion, review, and testing on and off the pgsql-hackers mailing list. This article outlines contributions from the perspective of our team. However, it is essential to recognize that the support, review, diligence, and collaboration from numerous core PostgreSQL developers beyond Microsoft were critical for the acceptance of patches into upstream PostgreSQL. What are the team's recent contributions? The PostgreSQL development cycle lasts for a year with a major version releasing every year. PostgreSQL 17 was released in September 2024. 
Below are some areas which our team made significant contributions to PG17. Async IO – read stream Adding Async IO and Direct IO has been a long running project led by engineers from our team with involvement and participation from the community. You can read about the evolution of this project led by Andres Freund’s in his talk The path to using AIO in postgres. In PG17 the AIO project took a huge step by adding a read stream interface. This work led by Thomas Munro paves way to add AIO implementations (e.g.: io_uring in Linux) without making changes to the users of this interface in the upcoming releases. It can also use read-ahead advice to drive buffered I/O concurrency in a systematic and centralized way, in preparation for later work on asynchronous I/O. In addition to the streaming read interface, some users of this interface such as pg_prewarm (Nazır Bilal Yavuz), sequential scan (Melanie Plageman) and ANALYZE (Nazır Bilal Yavuz) were part of PG17 as well. You can find more details on this work here: Streaming I/O and vectored I/O (PG Conf EU 2024). IO combining Until PG17, PostgreSQL would use single 8K reads when reading data from disk. With sequential read using the read stream interface, vectored read is used when possible thereby consuming multiple 8K pages at the same time. This project was primarily led by Thomas Munro with collaboration across our team and community. On Linux, PG uses preadv instead of pread for cases where it can accumulate the reads in sequential fashion. Below is a screenshot of PG16 sequential scan, with the top part displaying the SQL query being executed and the bottom part showing the strace output that indicates the system calls made by the Postgres process while executing the query. As you can see, the reads are performed as single 8K reads. PG16: IO not combined, 8K reads The figure below shows the same on PG17 with IO combining in action. The resultant I/O calls combine multiple 8K reads into one system call. PG17: IO combining in action UNION & IS [NOT] NULL query planner improvements Before PG17 the planner had to append the sub query results at the top level. This would lead to suboptimal planning. Changes in PG17 adjust the UNION planner to instruct the child planner nodes to provide a presorted input. The child node could then choose the most optimal ways (e.g., indexes) to sort resulting in performance improvements. These patches were contributed by David Rowley and you can find more here: Allow planner to use Merge Append to efficiently implement UNION. Below you can see how for a simple table the PG16 UNION query would use sequential scan, while in the PG17 it would use the index at the child nodes of the UNION query. [PG16]$ psql -d postgres psql (16.8) Type "help" for help. postgres=# CREATE TABLE numbers (num int); CREATE TABLE postgres=# CREATE UNIQUE INDEX num_idx ON numbers(num); CREATE INDEX postgres=# INSERT INTO numbers(num) SELECT * FROM generate_series(1, 1000000); INSERT 0 1000000 postgres=# EXPLAIN (COSTS OFF) SELECT num FROM numbers UNION SELECT num FROM numbers; QUERY PLAN ------------------------------------------------- Unique -> Sort Sort Key: numbers.num -> Append -> Seq Scan on numbers -> Seq Scan on numbers numbers_1 (6 rows) psql -d postgres [PG17]$ psql -d postgres psql (17.4) Type "help" for help. 
postgres=# CREATE TABLE numbers (num int); CREATE TABLE postgres=# CREATE UNIQUE INDEX num_idx ON numbers(num); CREATE INDEX postgres=# INSERT INTO numbers(num) SELECT * FROM generate_series(1, 1000000); INSERT 0 1000000 postgres=# EXPLAIN (COSTS OFF) SELECT num FROM numbers UNION SELECT num FROM numbers; QUERY PLAN ---------------------------------------------------------------- Unique -> Merge Append Sort Key: numbers.num -> Index Only Scan using num_idx on numbers -> Index Only Scan using num_idx on numbers numbers_1 (5 rows) Another query planner improvement was with respect to handling NULL constraints. The previous planner would always produce a plan resulting in evaluation of IS NULL/IS NOT NULL qualifications, regardless of whether the given column had a NOT NULL constraint. However, with PG17, the planner now optimizes by considering NOT NULL constraints. This can mean redundant qualifications (e.g., IS NOT NULL on a NOT NULL column) can be ignored, and impossible qualifications (e.g., IS NULL on a NOT NULL column) can prevent scans entirely. You can find more details of these changes in the merged patches from David Rowley here: Add better handling of redundant IS [NOT] NULL quals. VACUUM WAL volume reduction and performance improvements In PG17, because of the work done by Melanie Plageman, VACUUM pruning and freezing have been combined. This makes VACUUM faster by reducing the time it takes to emit and replay WAL. Further, this also results in generating less WAL, thereby saving storage space. Here is a screenshot of WAL inspect showing the differences in records generated. PG16: two separate WAL records are generated. PG17: only one WAL record is generated for pruning and freezing. Libpq performance and cancellation There are changes in PG17 to reduce the memory copies made during operations such as COPY TO STDOUT and pg_basebackup. This work was spun off from the project to improve physical replication performance, and Mustafa Melih Mutlu was behind these contributions. Additionally, changes from Daniel Gustafsson allow asynchronous cancellation to avoid blocking cancel calls on the client side in PG17. Partitioned tables and query planner improvements The Bitmapset data structure in PostgreSQL is used heavily by the query planner. In PG17, David Rowley committed a change to modify Bitmapset so that trailing zero words are never stored. This allows short-circuiting of various Bitmapset operations. For example, s1 cannot be a subset of s2 if s1 contains more words. This change helped speed up query planning for queries with partitioned tables having a large number of partitions. The following patch from David Rowley made this possible: Remove trailing zero words from Bitmapsets. Memory performance enhancements In PG17 a change was introduced to separate out the hot and cold paths during memory allocation, and run the hot path in a way which reduces the need to set up a stack frame, thereby leading to optimizations. You can find details of this change from David Rowley here: Refactor AllocSetAlloc(), separating hot and cold paths. Bump memory context adds an optimized memory context implementation to be used in sorting tuples by removing some bookkeeping which is not typically needed for such scenarios. For example, it removes the header which is used for freeing the memory chunks, since only a reset of the entire context is needed when sort is used. This reduces the memory usage in sort & incremental sort. The patch from David Rowley on this can be found here: Introduce a bump memory allocator. 
PG upgrade optimization The checks for data type usage during upgrade were improved by using a single connection for the check that validates data types. Previously, the data type checks connected separately to each of the databases. This change was introduced by Daniel Gustafsson. Developer tool As part of the memory plasticity efforts, details of which you can find in the talk here: Enhancing PostgreSQL Plasticity, we kicked off an intern project. The intern project led to an upstream contribution in the form of the pg_buffercache_evict tool, which has become a very handy tool when operating on the buffer pool (a short usage sketch follows this post). Palak Chaturvedi produced the initial version of this patch with guidance from Thomas Munro, and then Thomas took it over the finish line. Details of the patch can be found here: Add pg_buffercache_evict() function for testing. See you soon With this we conclude the first part of the blog, which reflects on the journey of the Microsoft PostgreSQL OSS engine team through 2024. The second part will come out soon and will take you through what is in store during 2025, the PG18 cycle, and more. See you soon with the second part: “Microsoft PostgreSQL OSS engine team: previewing 2025”. View the full article
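A minimal sketch of how the pg_buffercache_evict() function mentioned above is used, assuming PostgreSQL 17 with the pg_buffercache extension installed and superuser access (the numbers table reuses the example table from earlier in this post; check the PG17 documentation for the exact view columns before relying on this):
postgres=# CREATE EXTENSION IF NOT EXISTS pg_buffercache;
postgres=# SELECT pg_buffercache_evict(bufferid) FROM pg_buffercache WHERE relfilenode = pg_relation_filenode('numbers') LIMIT 1;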
  20. Description: In this webinar, learn how to set up and develop the new Azure Container Offer used to deploy containerized solutions as Kubernetes Apps from the Azure Marketplace. Presented by: David Starr - Principal Software Engineer, Microsoft. Register here. View the full article
  21. Description: In this session we will review the required technical configurations for Virtual Machine apps and how to publish Virtual Machine offers to the Azure Marketplace. Looking for additional guidance with Virtual Machines? An Azure technical expert will take you through: A brief overview of what a Virtual Machine offer type is. How to publish a Virtual Machine offer and integrate the solution from the Azure Portal to Partner Center. How to set up tenants. How to create different plans to best suit your customers’ needs. How to use cloud-init within the Azure Portal. Presented by: Neelavarsha Mahesh - Software Engineer, Microsoft. Register here. View the full article
  22. Description: This session will show how the SaaS Accelerator project can help partners go to market more quickly by speeding up the technical implementation needed to publish their transactable SaaS offers on Azure Marketplace. The session covers: Overview of SaaS offers and technical requirements. Overview of the SaaS Accelerator code base. Deploying the SaaS Accelerator - Demo. Presented by: Santhosh Bomma - Senior Software Engineer, Microsoft. Register here. View the full article
  23. Description: Marketplace Rewards is part of ISV Success and offers sales and marketing benefits to help ISVs accelerate application sales on the Microsoft commercial marketplace. Join this session to learn how to transform your approach and elevate your business in the competitive market landscape, along with: The availability and eligibility requirements for Marketplace Rewards. Marketplace Rewards’ tier-based model that is based on marketplace performance (Marketplace billed sales, solution value, or Teams App monthly active users). Partner success with Marketplace Rewards and the ROI of activating benefits. Enhanced Marketing Efforts: understand how integrating these benefits can enhance your marketing efforts, enabling you to reach a wider audience and create impactful marketing campaigns. Gain insights on optimizing the unique benefits offered by Marketplace Rewards to enhance your market presence and boost your business performance, including Azure Sponsorship. Presented by: Luxmi Nagaraj - Senior Technical Program Manager, Microsoft. Register here. View the full article
  24. Description: In this session you will learn about the payouts process lifecycle for the Microsoft Commercial Marketplace, how to view and access payout reporting, and what payment processes are supported within Partner Center. Join this session to learn about the payouts process within Azure Marketplace. We will review the following topics: The payouts process lifecycle for the Azure Marketplace. How to register and the registration requirements. General payout processes from start to finish. How to view and access payout reporting. Presented by: David Najour - Senior Business Operations Manager, Microsoft. Register here. View the full article
  25. Description: In this technical session, learn how to implement the components of a fully functional SaaS solution, including the following: a SaaS landing page, a webhook to subscribe to change events, integrating your SaaS product into the marketplace, and more! Presented by: Santhosh Bomma - Senior Software Engineer, Microsoft. Register here. View the full article