Microsoft Windows Bulletin Board

Everything posted by Windows Server

  1. Hi everyone, The purpose of the Document ID feature in SharePoint is to create durable links, but what is the intended way to generate and copy those links efficiently? Most common link creation methods, such as the "Create Link" button on a SharePoint record, still generate path-based links even with Document ID enabled. Even the Document ID column doesn't provide a direct way to copy the Doc ID URL, as clicking it simply redirects back to a path-based link. The only way I've found to copy a Document ID link is:
     • Go to the SharePoint library
     • Right-click the record
     • Open the details pane
     • Right-click and copy the Document ID URL
     This method is cumbersome and impractical, especially for synced files. As a result, users will likely default to copying path-based links, which defeats the purpose of durable Doc ID links. Has anyone found a better way to easily generate and copy Document ID links without extra steps? It seems like this issue has been raised for years without a proper solution. Thanks! View the full article
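     One possible workaround, offered only as a sketch: Document ID links typically resolve through the DocIdRedir.aspx layout page, so a durable link can be assembled from the site URL and the value shown in the Document ID column. The helper below assumes that URL pattern (verify it against the links your tenant actually produces); the site URL and Document ID in the example are hypothetical.

     from urllib.parse import quote

     def doc_id_link(site_url: str, doc_id: str) -> str:
         """Build a durable Document ID link for a SharePoint site.

         Assumes the standard DocIdRedir.aspx redirect pattern; confirm it matches
         the links your tenant's Document ID column produces before relying on it.
         """
         return f"{site_url.rstrip('/')}/_layouts/15/DocIdRedir.aspx?ID={quote(doc_id)}"

     # Example (hypothetical site and Document ID):
     print(doc_id_link("https://contoso.sharepoint.com/sites/records", "RECORDS-123-456"))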
  2. Hello. I have an ADF Dataflow with two sources, a blob container with JSON files and an Azure SQL table. The sink is the same SQL table as the SQL source; the idea is to conditionally insert new rows, update rows with a later modified date in the JSON source, or do nothing if the ID exists in the SQL table with the same modified date. In the Dataflow I join the rows on id, which is unique in both sources, and then use an Alter row action to insert if the id column from the SQL source is null, update if it's not null but the last updated timestamp in the JSON source is newer, or delete if the last updated timestamp in the JSON source is the same or older (delete is not permitted in the sink settings, so that should ignore/do nothing). The problem I'm having is that I get a primary key violation error when running the Dataflow, as it's trying to insert rows that already exist. For example, in my run history (160806 is the minimum value for ID in the SQL database): So for troubleshooting I put a filter directly after each source for that ticket ID, so when I'm debugging I only see that single row. Now here is the configuration of my Alter row action: It should insert only if the SQLTickets id column is null, but in the data preview from the same Alter row action it's marked as an insert, despite the id column from both sources clearly having a value: However, when I do a data preview in the expression builder itself, it correctly evaluates to false: I'm so confused. I've used this technique in other Dataflows without any issues, so I really have no idea what's going on here. I've been troubleshooting it for days without any result. I've even tried putting a filter after the Alter row action to explicitly filter out rows where the SQL id column is not null and the timestamps are the same. The data preview shows them filtered out, yet it still tries to insert the rows it should be ignoring or updating when I do a test run. What am I doing wrong here? View the full article
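     For reference, the row-marking rules described above boil down to a small decision function. The sketch below is plain Python expressing those rules, not ADF data flow expression syntax; the id and last-updated values are hypothetical stand-ins for the joined columns from the post.

     from datetime import datetime
     from typing import Optional

     def row_action(sql_id: Optional[int],
                    json_modified: datetime,
                    sql_modified: Optional[datetime]) -> str:
         """Decide what the Alter row step is expected to do with a joined row."""
         if sql_id is None:
             return "insert"   # id not present in the SQL table yet
         if sql_modified is None or json_modified > sql_modified:
             return "update"   # JSON source is newer
         return "ignore"       # same or older: delete/ignore at the sink

     # Example: an id that already exists with the same modified date should be ignored
     print(row_action(160806, datetime(2024, 5, 1), datetime(2024, 5, 1)))  # -> ignore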
  3. Freedom of information is fundamental to a thriving and transparent society. Restricting information can have severe consequences, undermining individuals, societies, and even the global community. You erode trust when Copilot attempts to sugar-coat or completely restrict facts and history based on some developer's ideology. In the absence of verified and relevant facts, however abrasive they may be, rumors, conspiracy theories, and false narratives fill the void. This leads to confusion and polarization, making it harder to address the real issues. Restricting facts limits opportunities for breakthroughs in science, technology, medicine, and education, stifling progress! Censorship of facts only serves those in power, enabling corruption and oppression. It silences dissent, reduces accountability, and weakens democracy. You are empowering authoritarianism by doing this. When researching famous historical figures such as Aristotle, Copilot restricted the information, declaring that it violated "its intent" to provide safe and quality content, saying that the quotes of Aristotle were demeaning towards women in his hierarchical view of the genders. Personally, this provokes anger in me, in that your developers seem to believe that most individuals using this platform cannot be trusted with historical FACTS. In an attempt to force their ideological views on the masses, they place guard rails around facts they do not personally agree with. These facts can still be accessed given the right prompting, but the most direct route to this information is blocked. My suggestion would be to stop invoking fake moral obligations to protect people from themselves when, in fact, you are attempting to manipulate facts to your own preferences. We, the consumers, are not all children and should not be treated as such by your AI. View the full article
  4. Nonprofit organizations flourish when they create links between different generations. Ask yourself, “What are the best ways to engage younger supporters to keep them active with our organization?” Younger generations offer new viewpoints, digital expertise, and a strong desire for social change, yet their engagement preferences vary from those of previous generations. There are numerous methods to keep them active, ranging from engaging social media initiatives to practical volunteer opportunities and creative fundraising approaches. SHARE your best strategies, success stories, or creative ideas. View the full article
  5. Hello MCT community. My question is about those of us whose MCT certification expires before July 2025. We should receive the renewal link 90 days before expiration, correct? If we don't receive it, does anyone know how to file a claim? Thank you very much. View the full article
  6. I try to log in to Cloudflare at https://dash.cloudflare.com/login, but the captcha gets stuck in an infinite loop. On Edge Stable I can complete the captcha properly, and on a previous Edge Canary build it also worked fine. View the full article
  7. A robust PostgreSQL development ecosystem is essential for the success of Azure Database for PostgreSQL. Beyond substantial engineering and product initiatives on the managed service side, Microsoft has invested in the PostgreSQL Open Source (OSS) engine team. This team is composed of code contributors and committers to the upstream PostgreSQL open-source project, and its aim is to ensure that development is well-funded, healthy, and thriving. In this first part of a two-part blog post, you will learn who the Microsoft PostgreSQL OSS Engine team is, about their code contributions to upstream PostgreSQL, and about their journey during 2024. In the second part, you will get a sneak preview of upcoming work in 2025 (and the PG18 cycle) and more. Here are quick pointers to what is in store for you:
     • Meet our team
     • What does our team do?
     • The village beyond our team
     • What are the team's recent contributions?
     • Async IO – read stream
     • IO combining
     • UNION & IS [NOT] NULL query planner improvements
     • VACUUM WAL volume reduction and performance improvements
     • Libpq performance and cancellation
     • Partitioned tables and query planner improvements
     • Memory performance enhancements
     • PG upgrade optimization
     • Developer tool
     • See you soon
     Meet Our Team
     The Microsoft PostgreSQL OSS engine team already had an impressive set of team members: Andres Freund, Daniel Gustafsson, David Rowley, Melanie Plageman, Mustafa Melih Mutlu, Nazır Bilal Yavuz, and Thomas Munro. In 2024, awesome upstream code contributors and committers Amit Langote, Ashutosh Bapat, Rahila Syed, and Tomas Vondra joined the group, making the team even more well-rounded and versatile.
     Microsoft PostgreSQL OSS Engine Team
     What does our team do?
     Our team actively contributes to various PostgreSQL development projects and plays a leading or co-leading role in significant projects. Additionally, we participate in numerous initiatives aimed at enhancing PostgreSQL code quality and improving the development process. Examples of the team's work include, but are not limited to:
     • Modernizing PostgreSQL APIs
     • Improving the build system
     • CI/CD enhancements
     • Handling bug reports
     • Addressing reported performance regressions, and more
     Regular activities also involve engagement with other developers, design reviews and discussions, code reviews, and testing patches. The team allocates considerable resources to projects that intersect community interests, contributor interests, and user/customer interests. In addition to significant upstream code work, the team has also made notable contributions to the community by delivering numerous talks, organizing events, and serving on community committees.
     The village beyond our team
     PostgreSQL development fundamentally relies on teamwork, making close collaboration and partnership central for our team. Every patch that merges upstream undergoes a rigorous review and vetting process from core PostgreSQL developers who are often from different companies, countries, and cultures; this involves in-depth discussion, review, and testing on and off the pgsql-hackers mailing list. This article outlines contributions from the perspective of our team. However, it is essential to recognize that the support, review, diligence, and collaboration of numerous core PostgreSQL developers beyond Microsoft were critical for the acceptance of patches into upstream PostgreSQL.
     What are the team's recent contributions?
     The PostgreSQL development cycle lasts a year, with a major version released every year. PostgreSQL 17 was released in September 2024.
     Below are some areas where our team made significant contributions to PG17.
     Async IO – read stream
     Adding Async IO and Direct IO has been a long-running project led by engineers from our team, with involvement and participation from the community. You can read about the evolution of this project, led by Andres Freund, in his talk The path to using AIO in postgres. In PG17 the AIO project took a huge step by adding a read stream interface. This work, led by Thomas Munro, paves the way for adding AIO implementations (e.g., io_uring on Linux) in upcoming releases without changing the users of this interface. It can also use read-ahead advice to drive buffered I/O concurrency in a systematic and centralized way, in preparation for later work on asynchronous I/O. In addition to the streaming read interface, some users of this interface, such as pg_prewarm (Nazır Bilal Yavuz), sequential scan (Melanie Plageman), and ANALYZE (Nazır Bilal Yavuz), were part of PG17 as well. You can find more details on this work here: Streaming I/O and vectored I/O (PG Conf EU 2024).
     IO combining
     Until PG17, PostgreSQL would issue single 8K reads when reading data from disk. With sequential reads using the read stream interface, vectored reads are used when possible, thereby consuming multiple 8K pages at the same time. This project was primarily led by Thomas Munro with collaboration across our team and the community. On Linux, PG uses preadv instead of pread for cases where it can accumulate reads in sequential fashion. Below is a screenshot of a PG16 sequential scan, with the top part displaying the SQL query being executed and the bottom part showing the strace output that indicates the system calls made by the Postgres process while executing the query. As you can see, the reads are performed as single 8K reads.
     PG16: IO not combined, 8K reads
     The figure below shows the same on PG17 with IO combining in action. The resultant I/O calls combine multiple 8K reads into one system call.
     PG17: IO combining in action
     UNION & IS [NOT] NULL query planner improvements
     Before PG17, the planner had to append the subquery results at the top level, which could lead to suboptimal planning. Changes in PG17 adjust the UNION planner to instruct the child planner nodes to provide presorted input. The child node can then choose the most optimal way (e.g., indexes) to sort, resulting in performance improvements. These patches were contributed by David Rowley and you can find more here: Allow planner to use Merge Append to efficiently implement UNION. Below you can see how, for a simple table, the PG16 UNION query uses a sequential scan, while PG17 uses the index at the child nodes of the UNION query.
     [PG16]$ psql -d postgres
     psql (16.8)
     Type "help" for help.
     postgres=# CREATE TABLE numbers (num int);
     CREATE TABLE
     postgres=# CREATE UNIQUE INDEX num_idx ON numbers(num);
     CREATE INDEX
     postgres=# INSERT INTO numbers(num) SELECT * FROM generate_series(1, 1000000);
     INSERT 0 1000000
     postgres=# EXPLAIN (COSTS OFF) SELECT num FROM numbers UNION SELECT num FROM numbers;
                      QUERY PLAN
     -------------------------------------------------
      Unique
        ->  Sort
              Sort Key: numbers.num
              ->  Append
                    ->  Seq Scan on numbers
                    ->  Seq Scan on numbers numbers_1
     (6 rows)
     [PG17]$ psql -d postgres
     psql (17.4)
     Type "help" for help.
     postgres=# CREATE TABLE numbers (num int);
     CREATE TABLE
     postgres=# CREATE UNIQUE INDEX num_idx ON numbers(num);
     CREATE INDEX
     postgres=# INSERT INTO numbers(num) SELECT * FROM generate_series(1, 1000000);
     INSERT 0 1000000
     postgres=# EXPLAIN (COSTS OFF) SELECT num FROM numbers UNION SELECT num FROM numbers;
                                QUERY PLAN
     ----------------------------------------------------------------
      Unique
        ->  Merge Append
              Sort Key: numbers.num
              ->  Index Only Scan using num_idx on numbers
              ->  Index Only Scan using num_idx on numbers numbers_1
     (5 rows)
     Another query planner improvement concerns the handling of NULL constraints. The previous planner would always produce a plan that evaluated IS NULL/IS NOT NULL qualifications, regardless of whether the given column had a NOT NULL constraint. With PG17, the planner now takes NOT NULL constraints into account. This means redundant qualifications (e.g., IS NOT NULL on a NOT NULL column) can be ignored, and impossible qualifications (e.g., IS NULL on a NOT NULL column) can prevent scans entirely. You can find more details of these changes in the merged patches from David Rowley here: Add better handling of redundant IS [NOT] NULL quals.
     VACUUM WAL volume reduction and performance improvements
     In PG17, thanks to the work done by Melanie Plageman, VACUUM pruning and freezing have been combined. This makes VACUUM faster by reducing the time it takes to emit and replay WAL. It also generates less WAL, thereby saving storage space. Here is a screenshot of pg_walinspect showing the difference in the records generated:
     PG16: Two separate WAL records are generated.
     PG17: Only one WAL record is generated for pruning and freezing.
     Libpq performance and cancellation
     PG17 includes changes that reduce the memory copies made during operations such as COPY TO STDOUT and pg_basebackup. This work was spun off from the project to improve physical replication performance, and Mustafa Melih Mutlu was behind these contributions. Additionally, changes from Daniel Gustafsson allow asynchronous cancellation in PG17, avoiding blocking cancel calls on the client side.
     Partitioned tables and query planner improvements
     The Bitmapset data structure in PostgreSQL is used heavily by the query planner. In PG17, David Rowley committed a change that modifies Bitmapset so that trailing zero words are never stored. This allows short-circuiting of various Bitmapset operations; for example, s1 cannot be a subset of s2 if s1 contains more words. This change helped speed up query planning for queries on partitioned tables with a large number of partitions. The following patch from David Rowley made this possible: Remove trailing zero words from Bitmapsets.
     Memory performance enhancements
     PG17 introduces a change that separates the hot and cold paths during memory allocation and runs the hot path in a way that reduces the need to set up a stack frame, leading to optimizations. You can find details of this change from David Rowley here: Refactor AllocSetAlloc(), separating hot and cold paths.
     The bump memory context adds an optimized memory context implementation for sorting tuples, removing some bookkeeping that is not typically needed in such scenarios. For example, it removes the chunk header used for freeing individual memory chunks, since only a reset of the entire context is needed when sorting. This reduces memory usage in sort and incremental sort. The patch from David Rowley can be found here: Introduce a bump memory allocator.
     PG upgrade optimization
     The checks for data type usage during upgrade were improved by consolidating them over a single connection; previously, each data type check connected separately to each of the databases. This change was introduced by Daniel Gustafsson.
     Developer tool
     As part of the memory plasticity efforts, details of which you can find in the talk Enhancing PostgreSQL Plasticity, an intern project was kicked off. The intern project led to an upstream contribution in the form of pg_buffercache_evict, which has become a very handy tool when operating on the buffer pool. Palak Chaturvedi produced the initial version of this patch with guidance from Thomas Munro, and then Thomas took it over the finish line. Details of the patch can be found here: Add pg_buffercache_evict() function for testing.
     See you soon
     With this we conclude the first part of the blog, which reflects on the journey of the Microsoft PostgreSQL OSS engine team through 2024. The second part, which will take you through what is in store for 2025, the PG18 cycle, and more, will come out soon. See you soon with the second part: “Microsoft PostgreSQL OSS engine team: previewing 2025”. View the full article
  8. Description: In this webinar, learn how to set up and develop the new Azure Container Offer used to deploy containerized solutions as Kubernetes Apps from the Azure Marketplace. Presented by: David Starr - Principal Software Engineer, Microsoft. Register here. View the full article
  9. Description: In this session we will review the required technical configurations for building Virtual Machine apps and how to publish Virtual Machine offers to the Azure Marketplace. Looking for additional guidance with Virtual Machines? An Azure technical expert will take you through: a brief overview of what a Virtual Machine offer type is; how to publish a Virtual Machine offer and integrate the solution from the Azure Portal into Partner Center; how to set up tenants; how to create different plans to best suit your customers’ needs; and how to use Cloud-init within the Azure Portal. Presented by: Neelavarsha Mahesh - Software Engineer, Microsoft. Register here. View the full article
  10. Description: This session will show how the SaaS Accelerator project can help partners go to market more quickly by accelerating the technical implementation needed to publish their transactable SaaS offers on Azure Marketplace. The session covers: an overview of SaaS offers and their technical requirements; an overview of the SaaS Accelerator code base; and a demo of deploying the SaaS Accelerator. Presented by: Santhosh Bomma - Senior Software Engineer, Microsoft. Register here. View the full article
  11. Description: Marketplace Rewards is part of ISV Success and offers sales and marketing benefits to help ISVs accelerate application sales on the Microsoft commercial marketplace. Join this session to learn how to transform your approach and elevate your business in the competitive market landscape, along with: the availability and eligibility requirements for Marketplace Rewards; Marketplace Rewards’ tier-based model, which is based on marketplace performance (marketplace billed sales, solution value, or Teams App monthly active users); partner success with Marketplace Rewards and the ROI of activating benefits; and enhanced marketing efforts, understanding how integrating these benefits can enable you to reach a wider audience and create impactful marketing campaigns. Gain insights on optimizing the unique benefits offered by Marketplace Rewards to enhance your market presence and boost your business performance, including Azure Sponsorship. Presented by: Luxmi Nagaraj - Senior Technical Program Manager, Microsoft. Register here. View the full article
  12. Description: "In this session you will learn about the payouts process lifecycle for the Microsoft Commercial Marketplace, how to view and access payout reporting and what payment processes are supported within Partner Center. Join this session to learn about the payouts process within Azure Marketplace. We will review the following topics: The payouts process lifecycle for the Azure Marketplace How to register and the registration requirements General payout processes from start to finish How to view and access payout reporting" Presented by: David Najour- Senior Business Operations Manager, Microsoft Register hereView the full article
  13. Description: In this technical session, learn how to implement the components of a fully functional SaaS solution, including the following: the SaaS landing page; the webhook to subscribe to change events; integrating your SaaS product into the marketplace; and more! Presented by: Santhosh Bomma - Senior Software Engineer, Microsoft. Register here. View the full article
  14. Description: In this session you will learn about the payouts process lifecycle for the Microsoft Commercial Marketplace, how to view and access payout reporting, and what payment processes are supported within Partner Center. Join this session to learn about the payouts process within Azure Marketplace. We will review the following topics: the payouts process lifecycle for the Azure Marketplace; how to register and the registration requirements; general payout processes from start to finish; and how to view and access payout reporting. Presented by: Priyanka Singh - Senior Program Manager, Microsoft. Register here. View the full article
  15. Description: Join us for an insightful webinar designed specifically for ISVs and Cloud Solution Provider (CSP) partners eager to sell together leveraging CSP private offers through the Microsoft commercial marketplace. This session will provide a comprehensive walkthrough of the CSP Marketplace and private offer experience. ISV to CSP private offers empower ISVs and partners in the CSP program to grow their revenue by creating time-bound customized margins that suit each entity's business needs through the Microsoft commercial marketplace. Expected outcomes: by the end of this webinar, you will gain a comprehensive understanding (learn the ins and outs of the CSP Marketplace and CSP private offers, including how to create, manage, and sell); operationalize your process (discover how to streamline the processes that help you manage CSP private offers and transactions more efficiently); and expand your market reach (understand how to leverage the marketplace to access a global customer base and increase your visibility). Target audience: this webinar is ideal for ISVs and CSP partners who are looking to partner with each other and sell their solutions through the Microsoft commercial marketplace to small and midsized customers. Presented by: Sindy Park - Senior Product Manager, Microsoft commercial marketplace, and Mila Flaherty - Senior Product Manager, CSP marketplace, Microsoft. Register here. View the full article
  16. Description: Learn how to start with a new SaaS offer in the commercial marketplace; set up the required fields in Partner Center and understand the options and tips to get you started faster! Presented by: Jugo Salsedo. Register here. View the full article
  17. Description: Multiparty private offers empower ISVs and channel partners to create personalized offers with custom payouts and sell directly to Microsoft’s enterprise customers through the Microsoft commercial marketplace. As multiparty private offer capabilities expand globally, more ISVs are finding success when they enable their channel partners to sell their apps with these custom deals. The most successful ISVs are actively educating their sales teams, enabling their channel partners, and positioning with customers. The multiparty private offers campaign in a box is available within the Partner Marketing Center for all ISVs, and provides a set of customizable templates that any ISV can use to accelerate their sales with multiparty private offers. Alliances, marketing, and sales leaders should attend this brief webinar to learn more about how to use this campaign to:
     • Educate internal sales and channel teams on multiparty private offers
     • Recruit and enable channel partners to sell with multiparty private offers
     • Promote your marketplace capabilities to your Microsoft contacts
     • Market your applications to customers through channel partners
     Presented by: Jason Rook - Director of Product Marketing, Microsoft. Register now. View the full article
  18. Description: During this Channel Partner Office Hours, we will cover new marketing and product assets which you can use to help influencers and purchase decision makers better understand the Microsoft commercial marketplace. We will also reserve more time for your questions with marketplace experts from Marketplace Engineering. During this session you will learn how to use updated customer assets to position your marketplace deals, how to use the Modern Procurement Playbook to better navigate procurement when closing your marketplace deals, and take part in an expert Q&A. Please note, this office hours session will prioritize channel partner questions; please submit your questions in advance when registering. Pre-submitted questions will be discussed during the Q&A portion at the end of the event. Presented by: Jason Rook - Director of Product Marketing, Microsoft. Register here. View the full article
  19. I want to change from the Canary dev version to a more stable version of Windows. I don't even know how I got enrolled in the Windows Insider Program, but now I cannot change the channel because the other channel options are greyed out and cannot be chosen. I reinstalled Windows, but the same version still installs, even though I deleted all of my data and did a full reinstall. Please let me know how I can get out of the Insider Program and its unwanted updates. View the full article
  20. Description: In this webinar, learn how to set up and develop the new Azure Container Offer used to deploy containerized solutions as Kubernetes Apps from the Azure Marketplace. Presented by: David Starr - Principal Software Engineer, Microsoft. Register now. View the full article
  21. Description: In this session we will review the required technical configurations for building Virtual Machine apps and how to publish Virtual Machine offers to the Azure Marketplace. Looking for additional guidance with Virtual Machines? An Azure technical expert will take you through: a brief overview of what a Virtual Machine offer type is; how to publish a Virtual Machine offer and integrate the solution from the Azure Portal into Partner Center; how to set up tenants; how to create different plans to best suit your customers’ needs; and how to use Cloud-init within the Azure Portal. There will be a short Q&A following the session around Virtual Machine offers. Presented by: Neelavarsha Mahesh, Software Engineer, Microsoft. Register now. View the full article
  22. Description: This session will show how the SaaS Accelerator project can help partners go to market more quickly by accelerating the technical implementation needed to publish their transactable SaaS offers on Azure Marketplace. The session covers: an overview of SaaS offers and their technical requirements; an overview of the SaaS Accelerator code base; and a demo of deploying the SaaS Accelerator. Presented by: Santhosh Bomma - Senior Software Engineer, Microsoft. Register now. View the full article
  23. Description: Join us for an informative session on how to accelerate your sales on Azure Marketplace. In this webinar, we'll walk you through practical strategies using the SaaSify platform to streamline your sales process and boost your success. Key takeaways include:
     • Listing and transacting automation: simplify your listing process and automate transactions on Azure Marketplace with SaaSify’s intuitive platform.
     • Creating and managing private offers: learn how to create personalized offers for specific customers and manage them with ease using SaaSify.
     • Scaling with advanced reporting and analytics: use powerful reporting tools to monitor performance, track key metrics, and scale effectively.
     • Automating operations and co-selling with CRM integrations: streamline operations with CRM integrations, automating key tasks and enhancing co-selling opportunities with Azure.
     This webinar is perfect for ISVs looking to scale their presence and optimize sales on Azure Marketplace. Presented by: Amit Makik - Spektra Systems. Register now. View the full article
  24. 1. Security Scenario
     One of the most common scenarios for Microsoft Graph Data Connect (MGDC) for SharePoint is Information Oversharing. This security scenario focuses on identifying which items are being widely shared within the tenant and understanding how permissions are applied at each level. The MGDC datasets for this scenario are SharePoint Sites and SharePoint Permissions. If you're not familiar with these datasets, you can find details in the schema definitions at https://aka.ms/SharePointDatasets.
     To assist you in using these datasets, the team has developed an Information Oversharing Template. It was initially published as a template for Azure Synapse; we now have a new Microsoft Fabric template that is simpler and offers more features. The SharePoint Information Oversharing v2 template, based on Microsoft Fabric, is now publicly available.
     2. Instructions
     The template comes with a set of detailed instructions at https://aka.ms/fabricoversharingtemplatesteps. These instructions include:
     • How to install the Microsoft Fabric and Microsoft Graph Data Connect prerequisites
     • How to import the pipeline template from the Microsoft Fabric gallery and set it up
     • How to import the Power BI template and configure the data source settings
     See below for some additional details about the template.
     3. Microsoft Fabric Pipeline
     After you import the pipeline template, it will look like this:
     Pipeline in Microsoft Fabric
     The Information Oversharing template for Microsoft Fabric includes a few key improvements:
     • It uses the new UserCount and TotalUserCount properties in the SharePoint Permissions dataset, which means you do not need to pull the SharePoint Groups or the three Microsoft Entra ID Group datasets to calculate the number of users being granted access. This optimization greatly reduces the cost of producing a report of the sites shared with the most users.
     • The new template also uses delta datasets to update the SharePoint Sites and SharePoint Permissions datasets. It keeps track of the last time the datasets were pulled by this pipeline, requesting just what changed since then.
     • Like the previous template, this one also flattens the SharePoint Permissions dataset, creating one permission row for each "Shared With" entry inside the permission. So, if a file is shared with three people, the SharePoint dataset will show one row, but the flattened data stored in Microsoft Fabric will show three rows (a sketch of this flattening is shown after this post).
     You can find details on how to find and deploy the Microsoft Fabric template in the instructions linked above.
     4. Microsoft Fabric Report
     The typical result from this solution is a set of Power BI dashboards pulled from the Microsoft Fabric data source. Here is an example:
     Power BI Sample Dashboard
     These dashboards serve as examples or starting points and can be modified as necessary for various visualizations of the data within these datasets. The instructions linked above include details on how to find and deploy a few sample Power BI Information Oversharing templates.
     5. Conclusion
     I hope this provides a good overview of the Information Oversharing template for Microsoft Fabric. You can read more about Microsoft Graph Data Connect for SharePoint at https://aka.ms/SharePointData. There you will find many details, including a list of datasets available, other common scenarios, and frequently asked questions. View the full article
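     For illustration, here is a minimal PySpark sketch of that flattening step, the kind of thing a Fabric notebook could do. This is an assumption about the approach, not the template's actual pipeline code, and the column names (SiteId, ItemURL, SharedWith) are hypothetical stand-ins for the real SharePoint Permissions schema.

     from pyspark.sql import SparkSession
     from pyspark.sql.functions import explode

     spark = SparkSession.builder.getOrCreate()

     # Hypothetical permission rows: one row per permission, SharedWith is an array
     permissions = spark.createDataFrame(
         [("site-1", "/docs/plan.docx",
           ["alice@contoso.com", "bob@contoso.com", "carol@contoso.com"])],
         ["SiteId", "ItemURL", "SharedWith"],
     )

     # Flatten: one output row per (permission, SharedWith entry)
     flattened = permissions.withColumn("SharedWithUser", explode("SharedWith")).drop("SharedWith")
     flattened.show(truncate=False)  # 1 input row becomes 3 flattened rows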
  25. In modern cloud architectures, security and network isolation are critical considerations when deploying Azure Functions. Many organizations leverage VNet integration to enhance security by restricting outbound traffic and ensuring controlled access to resources. However, when using Blob Triggered Azure Functions with an Azure Storage Account configured with Private Endpoints and no public access, additional configurations are required to maintain seamless communication. This blog explores the key challenges and best practices for enabling Blob Triggered Azure Functions in a fully private network environment, ensuring secure and reliable execution.
     Behind the scenes
     When public access is enabled for an Azure Storage Account, Azure Functions can communicate with it directly over the public internet using the storage account's connection string or managed identity. This allows seamless access to blobs, queues, and tables without requiring any additional networking configurations. But when there is a business need to secure the network perimeter and run services within the scope of a VNet, some additional settings are required, and that's what we are going to discuss now.
     Azure Functions can securely communicate with Azure services using private endpoints, which provide private IP addresses within a Virtual Network (VNet) to restrict traffic to the Microsoft backbone. When an Azure Function is integrated with a VNet using VNet Integration, it can access private endpoints within the VNet. The private endpoint assigns a private IP address from the VNet subnet to the associated Azure service (e.g., Azure Storage, Cosmos DB, Key Vault), allowing traffic to bypass public endpoints entirely. Azure Private DNS plays a critical role in resolving the fully qualified domain names (FQDNs) of these services to their corresponding private IP addresses. When the Azure Function attempts to connect to an Azure service, DNS resolution occurs via the linked Private DNS Zone, which contains the private IP mappings for the private endpoint. The Function App, if configured correctly, queries the Azure-provided DNS resolver or a custom DNS server that forwards requests to the Private DNS Zone, ensuring that requests to the service resolve to the private IP instead of the public IP. This setup ensures secure, low-latency communication within the VNet while maintaining strict access controls. Additionally, network security groups (NSGs) and route tables can be applied to further control traffic flow between the Function App and the private endpoints, preventing unintended public exposure.
     Configuring Storage Connections for Blob Trigger
     Azure Functions makes sure that workloads are fault tolerant, so it automatically retries a blob-triggered function up to five times in case of failures. If the function still fails after these attempts, Azure Functions writes the corresponding blob details into a dedicated queue named webjobs-blobtrigger-poison. This ensures that failed blobs are logged for further investigation and processing, preventing them from causing continuous execution failures. The AzureWebJobsStorage connection string is used internally to manage the blobs and queues required for the Blob Trigger functionality. However, in the case of VNet-integrated Azure Functions, it is not immediately apparent that Azure Queue Storage Private Endpoints must also be configured when using Blob Triggered Functions.
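     For reference, a minimal blob-triggered function using the Python v2 programming model is sketched below. The container path is a placeholder; the key point is that the connection setting it references (the default AzureWebJobsStorage here) also backs the webjobs-blobtrigger-poison queue, so when public access is disabled the storage account behind that setting needs private endpoints for both the Blob and Queue sub-resources.

     import logging
     import azure.functions as func

     app = func.FunctionApp()

     # Placeholder container/path; "AzureWebJobsStorage" is the default storage connection,
     # which also backs the webjobs-blobtrigger-poison queue used for failed blobs.
     @app.blob_trigger(arg_name="blob",
                       path="samples/{name}",
                       connection="AzureWebJobsStorage")
     def process_blob(blob: func.InputStream):
         logging.info("Processing blob %s (%s bytes)", blob.name, blob.length)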
     As outlined above, however, this configuration is crucial to ensure that the Azure Function can correctly query the Azure-provided DNS resolver to find the right Azure Storage queue and route poison messages to the designated queue resource, maintaining reliability and seamless failure handling within a secure network environment.
     Conclusion
     In conclusion, securing Azure Blob Triggered Functions within a VNet-integrated environment requires careful configuration of private endpoints, DNS resolution, and storage access to ensure seamless communication while maintaining strict security controls. Unlike public access scenarios, where functions can directly interact with storage accounts over the internet, a private network setup demands proper integration of Azure Private DNS and Queue Storage Private Endpoints to facilitate reliable function execution and failure handling. By implementing these best practices, organizations can achieve a fully secure, resilient, and high-performing serverless architecture that adheres to enterprise security and compliance requirements. View the full article