Everything posted by Windows Server
-
I already opened a ticket with Microsoft Support in January 2024; the issue is known but is not a priority to fix. If I build a PKCS profile for Android and set the placeholder CN={{OnPrem_Distinguished_Name}}, the resulting certificate contains CN="CN=XXXX,OU=XXXX,DC=XXXX". In other words, the double quotes are added by a faulty implementation. If I use the same placeholder in an iOS PKCS profile, the certificate is issued correctly as CN=XXXX,OU=XXXX,DC=XXXX. View the full article
-
We are excited to bring you our latest weekly spotlight series edition. This week, we are focusing on the frequently asked questions about ‘Velocities’ in DFP. Check out all the Q&A details below. Your input is invaluable, so please feel free to reply with any questions or for more information in the Fraud Protection Tech Community. Best regards, DFP Product Team

1. What are velocities in Microsoft Dynamics 365 Fraud Protection? While Lists, ML scores, and other payload attributes give you insight into the current event being processed, velocities help you consider past behavior as well. Velocities give insight into the historical patterns of an individual or entity. They help answer questions such as: How many attempted transactions are coming from the same email address? How many unique users or IP addresses are involved? How many login attempts happened in a certain amount of time, such as 5 or 10 minutes? If I want to block anyone who tries to log in to the website more than 3 times in under ten minutes, I can do that. Velocities help identify patterns of events that occur over a period of time, which can be monitored to identify potentially fraudulent activity. By defining velocities, you can set thresholds to flag activities as suspicious when they exceed certain limits. References: Perform velocity checks - Dynamics 365 Fraud Protection | Microsoft Learn

2. How would someone use velocities in fraud protection? Velocities can be used in various ways, such as: Setting Rules: Define rules using velocities to automatically flag transactions that exceed predefined thresholds. Monitoring Patterns: Keep an eye on the frequency and volume of events associated with user accounts, payment instruments, or IP addresses. Investigating Anomalies: Use velocity data to investigate and understand unusual patterns that could indicate fraudulent behavior. References: Perform velocity checks - Dynamics 365 Fraud Protection | Microsoft Learn

3. Can you provide examples of velocities? Yes, here are a few examples: Total Spending Per User: This velocity tracks the sum of money spent by each user over a specified time frame. IP Address Usage: This velocity monitors the number of times an IP address is used to create new accounts. Device ID Checks: This velocity observes how often a particular device ID is used in transactions. References: Perform velocity checks - Dynamics 365 Fraud Protection | Microsoft Learn

4. Are there any system-defined velocities? Yes, Dynamics 365 Fraud Protection creates several system-defined velocities per environment, such as email, payment instrument, IP, and device ID velocities. These can be customized to fit the specific needs of your business. References: Perform velocity checks - Dynamics 365 Fraud Protection | Microsoft Learn

5. Why isn't my velocity rule being hit by some transactions even though the conditions are met? Microsoft D365 Fraud Protection is a distributed system. In a distributed system, events can happen concurrently, and there is no sequence or order between them if they arrive at the same time. (For transactions that come in at the same time, DFP does not block one transaction for the other.) From a velocity standpoint, this means that multiple transactions sent at the same time can each be considered the “first one” and, in these cases, can influence the aggregate count of the velocity.
One potential way to mitigate this on the customer side is to execute your transactions sequentially, one by one (i.e., only send the next transaction after the previous one has finished processing); however, this may not be desired behavior because it results in longer latencies for the transactions that are executed later. References: Perform velocity checks - Dynamics 365 Fraud Protection | Microsoft Learn

6. Do you recommend using device ID to set up a velocity rule? In Microsoft Dynamics 365 Fraud Protection, setting up velocity rules using device ID can be an effective method to identify suspicious activity patterns. For instance, velocity checks can help you spot patterns such as a single credit card quickly placing many orders from a single IP address or device, which might indicate potential fraud. You can define velocities using the SELECT, FROM, WHEN, and GROUPBY keywords, and device ID can be a useful attribute to GROUPBY in your velocity definition. It is important to tailor the velocity rules to the specific patterns and behaviors that are indicative of fraud in your business context. The device ID can be a valuable attribute to monitor, especially if device-related fraud is a concern for your organization. Always ensure that the field you want to observe for velocity is part of the API call, and consider the specific conditions and thresholds that are relevant to your business when defining these rules. References: Perform velocity checks - Dynamics 365 Fraud Protection; Manage rules - Dynamics 365 Fraud Protection | Microsoft Learn

7. In the recommended rules, there are velocity-based rules. How did you set the threshold for those velocity-based rules? The threshold for velocity-based rules in Microsoft Dynamics 365 Fraud Protection is typically set based on historical data analysis and the specific fraud patterns observed within your organization. It involves identifying the normal transaction velocity for legitimate users and then setting thresholds that flag transactions as suspicious when they exceed this normal velocity. It is important to continuously monitor and adjust these thresholds as fraud patterns evolve and as you gather more data on user behavior. Collaboration with your fraud management team and the use of machine learning models can also help in dynamically adjusting these thresholds to improve fraud detection accuracy.

8. Where can I find more information on setting up velocities? You can find detailed instructions and examples on the official Microsoft documentation site for Dynamics 365 Fraud Protection here: Perform velocity checks - Dynamics 365 Fraud Protection | Microsoft Learn View the full article
-
Join us on Thursday, December 19th at 8am PT as Ricardo Duncan, Product Marketing Manager, Biz Applications and Marcella Desloge, Product Marketing Manager, Biz Applications present 'Power Platform Total Economic Impact and ROI Summary'. In this session, discover the transformative potential of Microsoft Power Platform through Forrester's Total Economic Impact™ (TEI) study, which reveals a 224% ROI and significant cost savings over three years. Learn how the platform drives productivity, operational efficiency, and revenue growth for your customers. We hope you'll join us! Call to Action: Click on the link to save the calendar invite: https://aka.ms/TechTalksInvite View past recordings (sign in required): https://aka.ms/TechTalksRecording View the full article
-
Hi folks. I'm just getting started with ADS as we need to move an existing on-prem DB to Azure. I'm trying to use ADS to set up the target DB. I installed the SQL Database Projects extension and created a project from our on-prem DB, but every time I try to build it I get error MSB4020:

stdout: C:\Program Files\dotnet\sdk\7.0.403\Sdks\Microsoft.NET.Sdk\targets\Microsoft.NET.Sdk.targets(1199,3): error MSB4020: The value "" of the "Project" attribute in element <Import> is invalid. [c:\testing\WT pre-migration\WT pre-migration.sqlproj]

Here's the proj file:

<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="Build">
  <Sdk Name="Microsoft.Build.Sql" Version="0.2.0-preview" />
  <PropertyGroup>
    <Name>WT pre-migration</Name>
    <ProjectGuid>{AE8E25C1-F1D6-4447-A008-2C24C82B51FA}</ProjectGuid>
    <DSP>Microsoft.Data.Tools.Schema.Sql.SqlAzureV12DatabaseSchemaProvider</DSP>
    <ModelCollation>1033, CI</ModelCollation>
  </PropertyGroup>
  <Target Name="BeforeBuild">
    <Delete Files="$(BaseIntermediateOutputPath)\project.assets.json" />
  </Target>
</Project>

View the full article
-
In November we announced the release of SQL Server Management Studio (SSMS) 21 Preview 1, and last week we released Preview 2. If you’re wondering if you missed a blog post, you didn’t. We like to keep you guessing! In truth – we don’t post for every release, and with update notifications that are now available in SSMS 21, we're not certain we need to write a blog post every time there’s an update. Update notifications If you haven’t downloaded SSMS 21 yet, then you probably haven’t seen this notification that appears when there’s an update: *Note: That says Visual Studio 2022 update and we want it to say SSMS 21 update...we have a work item for that. If you select View details, the Visual Studio Installer launches and shares details about the update: You can see that I have Preview 2.0 installed, and the update version is 2.1. There’s also a link to the release notes, in case you want to review those before you install the update. Updating SSMS Why are we going into so much detail about an update? Stick with us. First, this is a whole new experience for users of SSMS. While it’s something folks have been asking for, that doesn’t mean it’s comfortable for everyone. Second, this is a great opportunity to talk about what changes when SSMS updates. If you’re reading the release notes (as we know you all take the time to do), then you won’t be surprised to learn that updates to SSMS occur because of changes in SSMS, or Visual Studio, or both. One of the key components of SSMS 21 is Visual Studio 2022. As a Visual Studio-based solution, SSMS leverages the same architecture and file dependencies. This integration allows SSMS to provide a better overall user experience. It also means that any updates or changes to Visual Studio files necessitate corresponding updates to SSMS, even if there are no direct changes to SSMS files themselves. Every time we release SSMS, the release notes will document if there is an update to Visual Studio. If you have the same version of Visual Studio installed, you’ll see parity. For example, SSMS 21 Preview 1 released with Visual Studio 17.13 Preview 1. SSMS 21 Preview 2 released with Visual Studio 17.13 Preview 2. For both of those releases, there were changes to both Visual Studio files and SSMS files. SSMS 21 Preview 2.1 Today, we released SSMS 21 Preview 2.1 and Visual Studio released 17.13 Preview 2.1. There are no other changes for SSMS – nothing the SSMS team updated for this release. The interconnected nature of SSMS and Visual Studio means that maintaining compatibility and performance requires the latest Visual Studio updates. When Visual Studio undergoes changes - whether they are bug fixes, performance enhancements, or feature additions - SSMS must also be updated to ensure it continues to function seamlessly. This proactive approach helps prevent potential issues that could arise from mismatched versions or dependencies, ultimately safeguarding the stability and reliability of SSMS. We learn something new every day As we’re still in preview, this also helped us discover a version issue we need to resolve. It’s documented in the known issues, but in case you want to help others understand: the SSMS version (21.0.73) did not change between Preview 2 and Preview 2.1. Within the Visual Studio Installer, the version shows as you would expect, but in Help > About, it still says Preview 2.0. Apologies in advance for any confusion, it will be fixed. 
If you have Preview 2 installed and you don’t see a notification that you can update to Preview 2.1, you can always go to Help > Check for Updates, or launch the Visual Studio Installer, which automatically checks for updates. Final thoughts We encourage folks to apply updates when they are available, and we hope you all are enjoying SSMS 21. Thank you to those who have submitted feedback on the site! Your feedback is invaluable as we work through previews and continue to improve SSMS. Thank you for your continued support, and we hope everyone has a great holiday. See you in 2025! View the full article
-
Our security team has often been receiving alerts that, during the installation of Symantec Encryption Desktop, Windows is using bcdedit.exe to modify the boot configuration, disabling the default Windows system recovery. This might be expected behavior to ensure no one can bypass the encryption at boot time, and it could be a defense mechanism. As we're receiving lots of alerts on this, we want to get to the root cause and confirm this is expected behavior. That way we can have it documented and fine-tune our detection. Does anyone know whether it interacts with the system boot configuration, and is there any mention of bcdedit tasks being used during installation? Command Line: "cmd.exe" /c schtasks.exe /Create /RU %USERNAME% /SC DAILY /TN runBCDEDIT /RL HIGHEST /TR "bcdedit.exe /set recoveryenabled No " & schtasks.exe /run /TN runBCDEDIT & schtasks.exe /Delete /TN runBCDEDIT /F & schtasks.exe /Delete /TN "runBCDEDIT" /F View the full article
-
Optimized locking is a Database Engine feature introduced in 2023 that drastically reduces lock memory and the number of locks required for concurrent writes. It is enabled by default for Azure SQL Database. Although the document (Optimized Locking - SQL Server | Microsoft Learn) introduces and explains the idea of optimized locking clearly, it might still be a bit vague and hard to visualize. To better understand how optimized locking works, I conducted a few labs and collected Extended Events to observe the lock acquisition sequence. Here, I have included two demonstrations of the lock acquisition sequence to give you a glimpse of how Optimized Locking works. However, I strongly recommend reading the public document (Optimized Locking - SQL Server | Microsoft Learn) to gain a basic understanding of Optimized Locking before going through the demonstrations.

Demonstration #1

Create table and populate data:

CREATE TABLE t2 (
    a INT NOT NULL,
    b INT NULL
);
INSERT INTO t2 VALUES (1, 10), (2, 20), (3, 30);
GO

Create the first session that runs the update query:

--session 1
BEGIN TRANSACTION foo;
UPDATE t2 SET b = b + 10 WHERE a = 1;

From the xevent logs, you will see the UPDATE query acquires 4 locks:
IX lock on OBJECT (table)
X lock on PAGE
X lock on RID of the row to be modified
X lock on TID (1088080)

Through the DMV, you will see that it only holds the XACT lock (the other three locks are released once the row has been updated, even though it has not been committed).

SELECT * FROM sys.dm_tran_locks
WHERE request_session_id = @@SPID
AND resource_type IN ('PAGE','RID','KEY','XACT');

If the second session now wants to update the same row where a = 1:

--session 2:
BEGIN TRANSACTION bar;
UPDATE t2 -- WITH (ROWLOCK)
SET b = b + 20 WHERE a = 1;

You will see that it's waiting on 'XACT_WITH_RESOURCE_TO_MODIFY' because it's trying to place an S lock on session 1's TID but has to wait, since session 1 owns the X lock on its TID. Once the transaction in session 1 has committed, you can see:
The transaction in session 2 can place the S lock on session 1's TID (1088080).
The transaction in session 2 stamps the to-be-modified row with its TID (1088086) and further places the X lock on it.

Then the transaction in session 2 commits (releasing the X lock on its TID), and session 1 starts a new transaction to update the same row (a = 1).

--session 1
BEGIN TRANSACTION foo;
UPDATE t2 SET b = b + 10 WHERE a = 1;

From the xevent log, you can tell that it does not need to place an S lock on session 2's previous transaction's TID (1088086); instead, it places an X lock on its new TID (1088087). The above lock acquisition sequence is illustrated below:

=====================================================================================================
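Before moving on to the second demonstration, here is an optional helper for repeating the lab from a monitoring script instead of a separate SSMS window. It is only a minimal sketch that runs essentially the same sys.dm_tran_locks query shown above; the pyodbc dependency, the connection string, and the target session ID are assumptions for illustration and are not part of the original lab.

import pyodbc  # assumed to be installed; any SQL Server client library would work

# Assumed connection string; point it at the database where table t2 lives.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=TestDB;Trusted_Connection=yes;TrustServerCertificate=yes;"
)

def show_locks(session_id: int) -> None:
    # Print the PAGE/RID/KEY/XACT locks currently held by one session.
    query = (
        "SELECT resource_type, resource_description, request_mode, request_status "
        "FROM sys.dm_tran_locks "
        "WHERE request_session_id = ? "
        "AND resource_type IN ('PAGE','RID','KEY','XACT');"
    )
    for row in conn.cursor().execute(query, session_id):
        print(tuple(row))

# Example: inspect the locks held by the window running session 1.
# Replace 55 with that window's actual session ID (SELECT @@SPID).
show_locks(55)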
Demonstration #2

Use the same table as demo #1. Create the first session that runs the update query:

--session 1
BEGIN TRANSACTION foo;
UPDATE t2 SET b = b + 10 WHERE a = 1;

From the xevent logs, you will see the UPDATE query in session 1 acquires 4 locks:
IX lock on OBJECT (table)
X lock on PAGE
X lock on RID of the row to be modified (51960, 1)
X lock on TID (1088163)

Create a second session that runs the UPDATE query on another row; you will see that it won't be blocked with Optimized Locking.

--session 2:
BEGIN TRANSACTION bar;
UPDATE t2 SET b = b + 10 WHERE a = 2;

From the xevent logs, you will see the UPDATE query in session 2 acquires 4 locks:
IX lock on OBJECT (table)
X lock on PAGE
X lock on RID of the row to be modified (it's a different row than a = 1; you can see the resource id is different: 51960, 65537)
X lock on TID (1088164)

If session 2 now wants to update the same row where a = 1 as session 1:

--session 2:
UPDATE t2 SET b = b + 20 WHERE a = 1;

You will see that it's waiting on 'XACT_WITH_RESOURCE_TO_MODIFY' because it's trying to place an S lock on session 1's TID (1088163) but has to wait, since session 1 owns the X lock on it. Once the transaction in session 1 commits:
The transaction in session 2 can place the S lock on TID (1088163).
The transaction in session 2 stamps the to-be-modified row with its TID (1088084) but does not need to further place the X lock on it, because it already has the X lock on its current TID (1088164) and hasn't released it yet (transaction uncommitted).

The above lock acquisition sequence is illustrated below:

(End) View the full article
-
The third generation of OneDrive (OD3) has been rolled out, featuring a simplified and modernized design. The updated navigation pane and colored folders (now also available in your Windows Explorer!) enhance the user experience. There is now a singular new/create experience, and the OneDrive home offers more than just recent files. AI-powered features provide personalized suggestions for you. Enhanced Navigation and Filtering The new line filter pills allow users to filter by name or person. Favoriting documents is something I really love to do, and users can jump from OneDrive right into a comment. The people view prioritizes important contacts at the top, and collaborators can be pinned for easy access. Meetings and Collaboration The meetings view displays upcoming meetings and past meetings along with associated files. Smarter search and shared folders are now 400 milliseconds faster, providing optimized results. Filters and advanced filtering options have been improved. Integration with Microsoft 365 OneDrive powers Microsoft 365 collaboration with presence in Office files and simplified sharing. There is broader support for expiration dates on more sharing links, a capability that is now back after being gone for some time. Project Nucleus enables OneDrive Web to function offline (so you can directly access files even when offline), with common actions now three times faster. The platform also boasts faster launch times and fewer interruptions. Microsoft OneDrive is the underlying solution that powers the collaborative files experiences across Microsoft 365. Meeting Recap and Future Enhancements The meetings recap feature allows users to summarize recordings and catch up on important files in just 10 seconds. Q&A on meeting recordings will be available, and future updates will include one-click access to Copilot and agents in OneDrive. Security enhancements include a restricted content discoverability policy, ensuring that certain files do not appear in Copilot results. Go to the Meetings view in OneDrive, find the right meeting, and in just one click, ask Copilot to recap the meeting. Rolling Out and Upcoming Features Several new features are rolling out, including the new home experience in File Explorer, colored folders in Explorer, and a modern cloud picker. Document libraries will receive a makeover, and Copilot in OneDrive will offer a summarize feature before opening documents. Users can compare documents, such as resumes or bank statements, by creating a table that compares the files. There will also be a Q&A feature for files without opening them and the ability to create FAQs from documents. I am looking forward to seeing Agents in OneDrive, the convert to PPT/Word feature, catch up, and meeting recap in the next year! Beyond ESPC24, continue the learning... The following OneDrive event recording takes you deeper into "OneDrive's latest AI innovations at work and home". Watch it now: Learn more about what the OneDrive team announced on October 8th, 2024. Cheers and enjoy all your new OneDrive experiences, Marijn Somers View the full article
-
Workplace AI will soon be as common as word processors and spreadsheets. Tangible AI benefits like better decision making, increased productivity, and better security will soon become must-haves for every business. Early movers have an opportunity to gain a competitive advantage. But doing so requires a strategic approach to AI adoption that takes advantage of technological advancements early—such as laptops and 2-in-1s with breakthrough AI capabilities. These devices are now easy for any business to obtain in the form of AI PCs from Microsoft Surface. Because they contain a new kind of processor called an NPU, they can run AI experiences directly on the device. Just as CPU and GPU work together to run business applications, the NPU adds power-efficient AI processing for new and potentially game-changing experiences that complement those delivered from the cloud. In a recent Microsoft webinar with experts from Forrester and Intel, leaders discussed how a thoughtful AI device strategy fuels operational success and positions organizations for sustained growth. In this blog post, we’ll examine a few key areas of AI device strategy. For more, watch the full webinar here: How device choice impacts your AI adoption strategy Focusing on high-impact roles An effective AI device strategy requires organizations to identify roles that gain the most value from AI capabilities. Data-centric functions—such as developers, analysts, and creative teams—depend on high-speed data processing, and AI-ready devices help these employees manage complex workflows, automate repetitive tasks, and visualize data-driven insights in real time. Choosing AI-enabled endpoints is not just about the NPU. High-resolution displays and optimized screen ratios, for example, support high-impact roles by providing ample workspace for AI-assisted analysis, modeling, and design work. Starting with on-device AI for these functions helps drive rapid value and motivates other teams to see the potential in AI-powered workflows. The phased rollout of AI devices builds a foundation for broader AI integration. Data governance remains central to technology’s advantage Data privacy and security enable confident adoption of AI tools. One benefit of devices with NPUs is that they allow AI to be used in scenarios where sending data to the cloud is not feasible. It’s also important to consider the general security posture enabled by a device. Hardware-based security features such as TPM 2.0 and biometric authentication help protect device integrity, supporting AI usage within a secure framework. With built-in protections that include hardware encryption, secure user authentication options, and advanced firmware defenses, AI-enabled devices create a trusted environment that upholds privacy standards and aligns with organizational compliance requirements. Choosing devices like Microsoft Surface that fit seamlessly into a wide range of device management setups supports faster adoption and reduces risk. Balancing advanced AI features with stable performance AI-enabled devices bring unique processing capabilities that don’t compromise the reliability of core functions. Specialized processors dedicated to AI workloads manage intensive tasks without drawing from the main CPU, preserving battery life and maintaining consistent performance. This balanced approach supports both advanced AI capabilities and essential day-to-day operations, providing employees with stable, responsive tools that adapt to their needs. 
AI-driven interactions, like responsive touch, intuitive inking, and enhanced image processing, further improve user experience. High-quality cameras and intelligent audio capture, for instance, optimize interactions in virtual meetings and collaboration, making these devices versatile and effective across different work scenarios. By focusing on the user experience, organizations empower teams to take full advantage of technology without a steep learning curve. Aligning IT and business goals for an effective AI strategy A strong AI device strategy brings together IT priorities and broader business objectives. While IT teams focus on security, manageability, and integration with existing infrastructure, business leaders aim to increase efficiency and support innovation. Aligning these goals enables a smooth AI adoption process, allowing organizations to leverage AI’s capabilities while meeting essential technical requirements. Strategically investing in devices with integrated security and manageability features, such as remote management of device settings and firmware updates, gives IT greater control over deployment and maintenance. This integrated approach allows organizations to keep their AI device strategy aligned with long-term goals, reducing the need for costly upgrades and enabling teams to work within a secure, adaptable tech environment. Supporting employee workflows with AI tools AI-enabled devices enhance productivity by automating repetitive tasks and giving employees more time to focus on high-value work. Tools like intelligent personal assistants and voice-driven commands support employees by streamlining tasks that would otherwise require manual effort. Enhanced typing experiences and personalized touch interactions improve user engagement, making AI tools easier to integrate into everyday workflows. With customizable features and inclusive design options, AI-enabled devices make advanced technology accessible to all team members, increasing satisfaction and reducing turnover. By enabling employees to focus on higher-level work, organizations can create an environment that supports meaningful productivity and helps retain talent. Proactive IT management with AI-driven insights Beyond the device, AI also offers new capabilities for device management, allowing IT teams to proactively monitor and resolve potential issues. By analyzing device usage patterns, AI can detect anomalies early, enabling IT to address risks before they impact employees. This shift from reactive to proactive management improves device reliability and reduces downtime, freeing IT resources to focus on broader strategic initiatives. Integrated AI security tools also improve protection, identifying threats as they emerge and securing devices with minimal manual intervention. With insights derived from AI-driven monitoring, IT teams can maintain secure, reliable systems that enhance overall operational stability. Crafting a forward-looking AI device strategy A structured AI device strategy prioritizes both immediate and long-term ROI by examining where new technology can have the greatest impact while also enhancing existing capabilities. By acting early, organizations position themselves to gain speed with AI and adopt the latest advancements as they are released. Whether you’re beginning with AI or looking to expand its role, a well-designed AI device strategy keeps your organization prepared for growth. 
To explore how AI-enabled devices can drive your team’s success, gain insights from experts at Forrester and Intel by watching the webinar: How device choice impacts your AI adoption strategy. View the full article
-
Hello, does anyone here know about this big issue with 2025 DCs? Since I started using them, we have the problem that Windows 11 clients are losing domain trust. It's really bad! Is there a fix for it, or should we wait for a patch? View the full article
-
Dear Community, As we plan our next steps following the end of the Legacy Silver & Gold benefits, we would appreciate your help in answering the questions below: 1. Will we be able to hold the Modern Work SMB Designation that we are eligible for and also buy the Partner Success Core pack? 2. Does the Microsoft 365 E5 that is included in the Solutions Partner Designation Modern Work SMB include Teams? 3. Does the Viva Suite that is included in the Solutions Partner Designation Modern Work SMB include Viva Goals? 4. Could you please confirm that we are able to combine the new partner benefits packages, meaning that we are able to buy one unit of each package (e.g., 1x Partner Launch & 1x Partner Success Core)? Thank you in advance for your prompt help as we finalize our FY25 budget! Warm regards, Nick View the full article
-
We are trying to get Windows Server 2022 up and running with the licensing we recently bought. We have to go through Software Assurance, because they are all VMs running on non-Datacenter servers. We have tried to look up keys on the Microsoft Business Center, but that pushed us toward a phone number that we have literally sat on for 3 hours. Our resellers tell us they can't do anything. We are stuck in trial mode for servers that need to go into production and have no way forward for ensuring that we are fully licensed. Has anyone else had this experience and can offer some guidance or resolutions they have found? View the full article
-
With a world of fast-growing digital native software companies building in the cloud and with Azure Marketplace – it’s likely to be the customer that discovers the enterprise application of tomorrow, before they have a conversation with an ISV or a partner. Positioning yourself as the “SaaS ecosystem orchestrator” and managing multiparty private offers when it comes to enterprise contracting is therefore a must do activity for the next generation Microsoft channel partner. For channel organizations, you could start to see the Microsoft Azure Marketplace as a platform for modern partnering. At Microsoft Ignite Live in Chicago, I had the pleasure of presenting a comprehensive practice builder for Azure Marketplace multiparty private offers (MPO) in your channel business. This session was aimed at helping partners leverage the full potential of the Azure Marketplace to drive growth and innovation. Following the marketplace Summit in London, and the session at Ignite, the simple circular presentation of each of the stages of the Practice Builder became affectionately known as the “MPO donuts”! Below, I’ll walk you through the key steps and insights shared during the presentation. The Value of Multiparty Private Offers (MPO) Multiparty Private Offers (MPO) provide significant benefits to all parties involved: Channel Partners: MPOs allow channel partners to collaborate with ISVs and to create tailored solutions for customers. This collaboration enhances the value proposition, enables partners to tap into pre-committed cloud budgets, and drives significant business growth Customers: Customers benefit from purchasing solutions through their trusted channel partners, but with the Microsoft Commercial Marketplace as the mechanism of delivery. Together, the marketplace and the partner can deliver the perfect balance of agility and innovation with software, with good SaaS governance. This approach simplifies procurement, optimizes costs, and ensures that customers receive comprehensive support and services from their channel ecosystem ISVs: For ISVs, MPOs offer an opportunity to modernize their sales message and reach new markets. By delivering solutions through the Azure Marketplace and a Microsoft Channel partner, ISVs benefit from broader and better adoption of their technology, break out of traditional silos and access a better engaged customer base The Microsoft Marketplace “Channel Practice Builder” (MCPB) is a simple 4 stage methodology to help you build and grow your Azure Marketplace resell capability. In each of the 4 stages there are a number of requirements, with an action and recommendation. You can download and see all the individual stages of the full Channel Practice Builder at aka.ms/UKMPO. Here I’ll walk you through the 4 stages at a high level. Step 1: Foundation The first step in enabling MPOs is to establish a strong foundation. This involves understanding the Azure Marketplace ecosystem and the business process change requirements across various roles in your organization. It’s crucial to align your business strategy with the capabilities and opportunities provided by the marketplace. Understand the ecosystem: Familiarize yourself with the Azure Marketplace and the different types of private offers available. This knowledge will help you identify the best opportunities for your business Align business strategy: Ensure that your business strategy aligns with the capabilities of the Azure Marketplace, and make sure the process has an initial owner and executive sponsor. 
This alignment will enable you to leverage the marketplace effectively and drive growth Step 2: Enablement Next, focus on enablement. This step involves equipping your team with the necessary skills and knowledge to navigate the Azure Marketplace and starts to build out the cross functional operational best practice. Training sessions, workshops, and leveraging Microsoft’s extensive resources can be immensely beneficial. Ensure that your team is well-versed in creating and managing private offers. Training and workshops: Conduct training sessions and workshops to equip your team with the skills needed to create and manage MPOs. Microsoft provides a wealth of resources to support this training Best practices: A number of key steps to success are included in the enablement section, including building a “single source of truth”, ensuring your quote to private offer process is defined, and selecting Microsoft Marketplace first ISVs to partner with to drive this new seller motion. Step 3: Execution Execution is where the rubber meets the road. Here, you’ll start collaborating with ISV partners to build Go To Market (GTM) activity to drive the net new opportunity. This involves clear communication, defining roles and responsibilities, and ensuring that all parties are aligned with the customer’s needs. Utilize the tools and support provided by Microsoft to streamline this process. Collaborate with partners: Build your campaigns (and use the resources such as the Marketplace MPO “Campaign in a box” https://aka.ms/PartnerMPOCampaign), map your high propensity accounts, and look to agree steps for success across the business and those of your aligned ISV partners Utilize Microsoft tools: Make use of the tools and support provided by Microsoft to streamline the execution process. This includes leveraging the Azure Marketplace platform to manage transactions and offers Step 4: Growth Finally, focus on growth. Once your MPOs are up and running, continuously monitor and optimize the process to evolve your business practices and modernize your SaaS resell capability. Gather feedback from customers and partners to refine your offerings. Leverage analytics and insights to identify new opportunities and scale your business. Monitor and optimize: Continuously monitor the performance of your MPO sales and ops processes and make necessary adjustments to optimize them Gather feedback: Collect feedback from customers and partners to refine your offerings and ensure they meet the needs of the market Build dedicated resource teams: At this stage the most successful partners enhance the trust of their internal teams by having a dedicated Marketplace Sales lead, operational lead and even an ISV/Vendor alliances lead. Key Takeaways Collaboration is key: Building a successful channel practice to maximize the opportunity with cloud marketplaces relies on strong collaboration and clearly defined goals both internally and externally with your ISV partners. Establish clear communication channels and foster a collaborative culture. Leverage Microsoft resources: Microsoft provides a wealth of resources to support partners in creating and managing MPOs. Including Microsoft multiparty private offers and the “Mastering the Marketplace” assets at Private offers in Partner Center - Mastering the Marketplace Make full use of these tools to enhance your capabilities. Focus on customer needs: Always keep the customer at the center of your strategy. 
Customers are modernizing their application stacks, and the Microsoft ecosystem (product and partner) is at the very heart of this. Channel partners have the opportunity to maximize their customer value by embracing a Microsoft Commercial Marketplace business strategy. Conclusion Enabling Azure Marketplace multiparty private offers in your channel business can unlock significant growth opportunities. By following these steps and leveraging the resources available, you can create a compelling and highly efficient modern SaaS resell practice that drives customer success and business growth. You can view the full MPO Channel Adoption framework at aka.ms/UKMPO View the full article
-
By: Jason Sandys – Principal Product Manager | Microsoft Intune Cloud-native is Microsoft’s goal for all commercial Windows endpoints. By definition, a cloud-native Windows endpoint is joined to Microsoft Entra ID and enrolled in Microsoft Intune. It represents and involves a clean break from on-premises related systems, limitations, and dependencies for device identity and management. This clean break from on-premises dependencies might align with larger organizational goals to reduce or eliminate on-premises infrastructure but doesn’t prevent users from accessing or using existing on-premises resources like file shares, printers, or applications. Cloud-native for Windows endpoints is a large change in thinking for most organizations and thus poses an initial challenge of how to even begin on this journey. This article provides you with guidance on how to begin and how to embrace this new model. For additional guidance that includes a higher-level discussion of what to do with existing endpoints, see: Best practices in moving to cloud native endpoint management | Microsoft 365 Blog to learn more. Proof of concept The first step is to begin with a proof of concept (POC). For any new technology, methodology, or solution, POCs offer numerous advantages. Specifically, they enable you to evaluate the new “thing” with minimal risk while building your skills and gaining stakeholder buy-in. Because the exact end state of Windows endpoints is highly variable among organizations and even within an organization, a POC for cloud-native Windows enables you to take an iterative approach for defining and deploying these endpoints. This iterative approach involves smaller waves of users and endpoints within your organization. It’s ultimately up to you to define which endpoints or users should be in each wave, but you should align this to your endpoint lifecycle and refresh plan. Aligning to your endpoint lifecycle allows you to minimize impact to your users by consolidating the delivery of new endpoints with the changeover from hybrid join to Microsoft Entra join, which requires a Windows reset or fresh Windows instance. Additional significant criteria to consider for which users and endpoints to include in each wave are the organizational user personas and endpoint roles. An iterative POC enables you to break work effort and challenges into more manageable pieces and address them individually or sequentially. This is important since some (often many) challenges related to adopting cloud-native Windows endpoints are isolated or not applicable to all endpoints or users in the organization. Some challenges may even remain unknown until they arise, and the only way to learn about them is by conducting actual production testing and evaluation. You don’t need to address or solve every challenge to successfully begin your journey to cloud-native Windows endpoints. An easy example for this is users that exclusively use SaaS applications: these users’ endpoints already have limited (if any) true on-premises service or application dependencies, and they likely face few, if any, challenges in moving to cloud-native Windows endpoints. Initial cloud-native Windows configuration There are some common activities that need to occur before you deploy your first cloud-native Windows endpoints. Keep in mind that this list is simply the steps to begin the iterative process, it’s not all-inclusive or representative of the final state. 
For a detailed walkthrough on configuring these items (and more), see the following detailed tutorial: Get started with cloud-native Windows endpoints. Identify the user personas and endpoint types within your organization. These typically vary among organizations, so there’s no standard template to follow. However, you should align your POC to these personas and endpoint types to limit each wave’s impact and scope of necessary change. Configure your baseline policies. Implement a minimum viable set of policies within Intune to deploy to all endpoints. Base these policies on your organizational requirements rather than what has been previously implemented in group policy (or elsewhere). We strongly suggest starting as cleanly as possible with this activity and initially including only what is necessary to meet the security requirements of your organization. Configure Windows Autopatch. Keeping Windows up to date is critical, and Windows Autopatch offers the best path to doing this (whether a Windows endpoint is cloud-native or not). Configure Windows applications. As with policies, this should be a minimal set of applications to deploy to your POC endpoints and can include Win32 based and Microsoft Store based applications. Configure Windows Autopilot. Windows Autopilot enables quick and seamless Windows provisioning without the overhead of classic on-premises OS deployment methods. With Windows Autopilot, the provisioning process for cloud-native Windows endpoints is quick and easy. Configure Delivery Optimization. Windows uses Delivery Optimization for downloading most items from the cloud. By default, Delivery Optimization leverages peers to cache and download content locally. Edit the default configuration to define which managed endpoints are peers or to disable peer content sharing. Enable Windows Hello for Business and enforce multi-factor authentication (MFA) using Conditional Access. Enable Cloud Kerberos Trust for Windows Hello for Business to enable seamless access to on-premises resources. These items significantly increase your organization’s security posture and place your organization well on the Zero Trust path. As the iterative POC process evolves to include more user personas and endpoint roles, you can add more functional policy requirements and applications. This will involve some discovery as you learn about the actual needs of these various personas and roles. Since you aren’t targeting everything from day one, you don’t need to have all requirements defined up front or solutions for every potential issue. Additional suggestions, tips, and guidance Don’t assume something does or doesn’t work on cloud-native Windows endpoints. The POC process enables you to iteratively test and evaluate applications, services, resources, and everything else in your environment – most of which isn’t typically documented. It might simply be part of the tacit or tribal knowledge within your organization. In general, you’ll find that nearly everything works just as it did before Windows cloud-native. Document everything. As you implement, document the “what” as well as the “why” for everything you configure. This allows you and your colleagues to come back at any time and understand or refresh your memory for your cloud-native Windows implementation, as well as many other things in the environment. Microsoft doesn’t expect organizations to rapidly convert their entire estate of Windows endpoints to cloud-native. 
Instead, we recommend taking it slow, being deliberate, and using the iterative approach outlined above by aligning to your hardware refresh cycle to minimize impact on users. This also provides you with time to prove the solution, address gaps, and overcome challenges as you discover them without disrupting productivity. Use the built-in Conditional Access policy templates to quickly get started with MFA and other Conditional Access capabilities. The templates enable you to implement Conditional Access policies that align with our recommendations without experimentation. Accessing on-premises resources including file shares from a cloud-native Windows endpoint works with little to no configuration. Refer to the documentation for more details: How SSO to on-premises resources works on Microsoft Entra joined devices. Call to action Begin exploring your cloud-native Windows POC today. Taking this first step now will allow your organization to start reaping the benefits of enhanced security, streamlined management, and improved user experience sooner. Every organization is unique, so there’s no blueprint for comprehensively implementing cloud-native Windows. However, you don’t need a comprehensive blueprint to be successful, you just need to begin and slowly expand adoption throughout your organization when and where it makes sense. The guidance provided above along with the getting started tutorial should give you the information, tools, and confidence to move forward with decoupling your endpoints and users from your on-premises anchors and fully embrace cloud-native Windows. For a more detailed and in-depth discussion on adopting cloud-native Windows, including planning and execution, see Learn more about cloud-native endpoints. If you have any questions, leave a comment below or reach out to us on X @IntuneSuppTeam. Additional Blogs 3 benefits of going cloud native | Microsoft 365 Blog How to achieve cloud-native endpoint management with Microsoft Intune | Microsoft 365 Blog Myths and misconceptions: Windows 11 and cloud native | Windows IT Pro Blog (microsoft.com) View the full article
-
Hi guys, I need some help. I need to extract data from the project to connect to Power BI. I'm currently using the database export feature by going to Report > Visual Reports > Save Data, selecting Task Summary in the Field Picker (for custom fields), and then Save Database. However, the schedule is too large and I'm getting an error on this screen asking me to reduce/split the project. I wanted to do this export in VBA, and I even tried the VBA code below, but I couldn't find a parameter to export the custom fields.

Sub Macro1()
    VisualReportsSaveDatabase strNamePath:="C:\Users\tulio.oliveira\Downloads\Chronogram_Consolidated_V5.mdb", PjVisualReportsDataLevel:=pjLevelWeeks
End Sub

View the full article
-
I want to set up a dashboard showing test results from different sites. My vision is that each site would have its own workbook that they input data into, but every site would be entering data in a table of the same format. The dashboard would pull info from each of these sheets and show a chart/graph for each site displaying its pass/fail rate, with a slicer/timeline to select the month that would control all of the charts/graphs. All the workbooks will be held in a central SharePoint site. What would be the best way to set this up? View the full article
-
I need help restoring a big workbook that a former employee started; it looks like it needs some updating. There are several sheets that use roll-up sheets and a template to pull information into a main worksheet that compares data. Data can be selected by property and year. When this was first created, there was only one year of actual data. Now there are 3 years, and I do not know how to update this so the main worksheet pulls the correct data. Main Sheet Here is a picture of some of the tabs the data is pulled from View the full article
-
I have been tasked with implementing Azure AD Connect for my company. We currently have 2 locally virtualized domain controllers and are already using Office 365 for mail. What would be the easiest way to implement AD Connect with the least amount of downtime/user interruption? View the full article
-
Optimizing Inference Performance for “On-Prem” LLMs
Introduction

In a previous blog we addressed how data scientists and data engineers on technology teams can achieve effective model monitoring of LLMaaS and gain control over their LLM needs, through RAG and Wallaroo LLM Listeners™, to mitigate hallucinations and bias and generate accurate, reliable outputs. AI technology teams can extend control over their LLMs from model governance to other LLMOps aspects by deploying custom private/on-prem LLMs. Open source LLMs that AI teams customize and deploy directly within their private environments are commonly referred to as “on-prem” LLMs. As a result, on-prem LLMs may offer a great deal of privacy and security over MaaS. However, they tend to present challenges related to performance and infrastructure cost that may prevent LLM adoption and scale within the enterprise. As such, they do introduce a new challenge of model performance and optimization on finite infrastructure. With custom private/on-prem LLMs, technology teams face the challenge of meeting consistent inference latency and inference throughput goals. Production LLMs can place a burden on existing finite infrastructure resources, resulting in subpar inference performance. Poor inference performance can prevent an organization from taking advantage of custom private/on-prem LLMs and also delay the time to value of the LLM solution. In addition to offering a unified framework for managing and monitoring LLMs, Wallaroo enables enterprises working with private/on-prem LLMs to optimize performance on existing infrastructure while simplifying the deployment and monitoring of those LLMs.

Custom LLM Performance Optimization

Llama.cpp and vLLM are two versatile and innovative frameworks for optimizing LLM inference. Let’s look at how these frameworks, integrated within Wallaroo, can help technology teams achieve optimal inference performance for custom LLMs on-prem.

Llama.cpp

Llama.cpp is known for its portability and efficiency, designed to run optimally on CPUs and GPUs without requiring specialized hardware. It is a lightweight framework, which makes it ideal for technology teams launching LLMs on smaller devices and local on-prem machines, such as edge use case scenarios. It is also a versatile framework that provides extensive customization options, allowing technology teams to fine-tune various parameters to suit specific needs. These two capabilities deliver control and versatility for custom LLMs on-prem that technology teams don’t enjoy with managed inference endpoints. Let’s look in detail at how a team would deploy a custom LLM into their on-prem infrastructure and optimize the resources available to them using Llama.cpp and vLLM with Wallaroo. To begin with, both Llama.cpp and vLLM are deployed in Wallaroo using the Bring Your Own Predict (BYOP) framework. The BYOP framework allows organizations to use pre-defined Python templates and supporting libraries to build and configure custom inference pipelines that can be auto-packaged natively in Wallaroo. This means that teams have added control over inference microservice creation for any type of use case, while using a native method and with a great deal of infrastructure abstraction. Deploying Llama.cpp with the Wallaroo BYOP framework requires llama-cpp-python. This example uses Llama 70B Instruct Q5_K_M for testing and deploying Llama.cpp.

Llama.cpp BYOP Implementation Details
1.) To run llama-cpp-python on GPU, llama-cpp-python is installed using the subprocess library in Python, straight into the Python BYOP code:

import subprocess
import sys

pip_command = (
    f'CMAKE_ARGS="-DLLAMA_CUDA=on" {sys.executable} -m pip install llama-cpp-python'
)
subprocess.check_call(pip_command, shell=True)

2.) The model is loaded via the BYOP’s _load_model method, which uses the largest supported context and offloads all the model’s layers to the GPU:

def _load_model(self, model_path):
    llm = Llama(
        model_path=f"{model_path}/artifacts/Meta-Llama-3-70B-Instruct.Q5_K_M.gguf",
        n_ctx=4096,
        n_gpu_layers=-1,
        logits_all=True,
    )
    return llm

3.) The prompt is constructed based on the chosen model as an instruct variant:

messages = [
    {
        "role": "system",
        "content": "You are a generic chatbot, try to answer questions the best you can.",
    },
    {"role": "user", "content": prompt},
]
result = self.model.create_chat_completion(
    messages=messages, max_tokens=1024, stop=["<|eot_id|>"]
)

4.) The deployment configuration sets what resources are allocated for the Llama.cpp LLM’s use. For this example, the Llama.cpp LLM is allocated 8 CPUs, 10 Gi RAM, and 1 GPU.

deployment_config = DeploymentConfigBuilder() \
    .cpus(1).memory('2Gi') \
    .sidekick_cpus(model, 8) \
    .sidekick_memory(model, '10Gi') \
    .sidekick_gpus(model, 1) \
    .deployment_label("wallaroo.ai/accelerator:a100") \
    .build()

vLLM

In contrast to Llama.cpp, vLLM focuses on ease of use and performance, offering a more streamlined experience with fewer customization requirements. vLLM leverages GPU acceleration to achieve higher performance, making it more suitable for environments with access to powerful GPUs. vLLM delivers the following competitive features:

Ease of use: One of vLLM’s primary design decisions is user-friendliness, making it more accessible to technology teams with different levels of expertise. vLLM provides a straightforward setup and configuration process for quick development.
High performance: vLLM is optimized for high performance, leveraging advanced techniques such as PagedAttention to maximize inference speed and tensor parallelism to efficiently distribute computations across multiple GPUs. This results in faster responses and higher throughput, making it the perfect choice for demanding applications.
Scalability: vLLM is built with scalability in mind, deploying any LLM on a single GPU or across multiple GPUs. This scalability makes it suitable for both small-scale and large-scale deployments.

In this vLLM BYOP implementation example, Llama 3 8B Instruct is used to deploy a vLLM.

1.) To run vLLM on CUDA, vLLM is installed using the subprocess library in Python, straight into the Python BYOP code:

import subprocess
import sys

pip_command = (
    f'{sys.executable} -m pip install https://github.com/vllm-project/vllm/releases/download/v0.5.2/vllm-0.5.2+cu118-cp38-cp38-manylinux1_x86_64.whl --extra-index-url https://download.pytorch.org/whl/cu118'
)
subprocess.check_call(pip_command, shell=True)

2.) The model is loaded via the BYOP’s _load_model method, setting the model weights that are found here:

def _load_model(self, model_path):
    llm = LLM(
        model=f"{model_path}/artifacts/Meta-Llama-3-8B-Instruct/"
    )
    return llm

3.) The deployment configuration sets what resources are allocated for the vLLM’s use. For this example, the vLLM is allocated 4 CPUs, 10 Gi RAM, and 1 GPU.
deployment_config = DeploymentConfigBuilder() \
    .cpus(1).memory('2Gi') \
    .sidekick_cpus(model, 4) \
    .sidekick_memory(model, '10Gi') \
    .sidekick_gpus(model, 1) \
    .deployment_label("wallaroo.ai/accelerator:a100") \
    .build()

We can see from both the Llama.cpp and vLLM examples that Llama.cpp brings portability and efficiency, designed to run optimally on CPUs and GPUs without any specific hardware, while vLLM brings user-friendliness, rapid inference speeds, and high throughput, making it an excellent choice for projects that prioritize speed and performance. Technology teams have the flexibility, versatility, and control to optimize deployment of custom LLM models to their limited infrastructure across CPUs and GPUs using the Llama.cpp and vLLM frameworks.
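Once the LLM has been uploaded and a deployment configuration built, the remaining step is to attach the model to a pipeline, deploy it, and send an inference request. The original post does not show this step, so the sketch below is only an illustration based on the standard Wallaroo SDK pipeline workflow; the pipeline name, the "text" input column, and the prompt are assumptions and must match the input schema defined for your BYOP model.

import pandas as pd
import wallaroo

wl = wallaroo.Client()

# Attach the uploaded BYOP LLM (the `model` object from the steps above) to a
# pipeline and deploy it with the resource allocation in deployment_config.
pipeline = wl.build_pipeline("llm-on-prem-pipeline")  # assumed pipeline name
pipeline.add_model_step(model)
pipeline.deploy(deployment_config=deployment_config)

# Send a single prompt; the "text" column is an assumption and must match the
# input schema used when the BYOP model was uploaded.
result = pipeline.infer(
    pd.DataFrame({"text": ["Summarize the benefits of on-prem LLM deployment in one sentence."]})
)
print(result)

# Release the resources when finished.
pipeline.undeploy()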
Dynamic Batching

Dynamic Batching helps optimize inference throughput by aggregating multiple incoming requests into a single batch that is processed together. This not only improves inference throughput but also optimizes utilization of limited resources, helping avoid additional cloud or hardware costs. When multiple inference requests are sent from one or more clients, a Dynamic Batching Configuration accumulates those requests into one "batch" and processes them at once. This increases efficiency and inference performance by using resources for one accumulated batch rather than starting and stopping for each individual request. Once complete, the individual inference results are returned to each client.

The benefits of Dynamic Batching are multi-fold: higher throughput, improved hardware utilization, and reduced latency for both batch and real-time inference workloads, all leading to cost efficiency and efficient infrastructure utilization.

In Wallaroo, Dynamic Batching of inferences is triggered when either the max batch delay OR the batch size target is met. When either condition is met, inference requests are collected together and processed as a single batch. When Dynamic Batching is implemented, the following occurs:

Inference requests are processed in FIFO (First In, First Out) order.
Inference requests containing batched inputs are not split to accommodate dynamic batching.
Inference results are returned to the original clients.
Inference result logs store results in the order the inferences were processed and batched.
Dynamic Batching Configurations and target latency are honored and are not impacted by Wallaroo pipeline deployment autoscaling configurations.

Dynamic batching in Wallaroo can optionally be configured when uploading a new LLM or retrieving an existing one. The Dynamic Batch Config takes the following parameters:

Maximum batch delay: Sets the maximum batch delay in milliseconds.
Batch size target: Sets the target batch size; cannot be less than or equal to zero.
Batch size limit: Sets the batch size limit; cannot be less than or equal to zero. This is used to control the maximum batch size.

E.g.:

dynamic_batch_config = wallaroo.dynamic_batching_config.DynamicBatchingConfig() \
    .max_batch_delay_ms(5) \
    .batch_size_target(1) \
    .batch_size_limit(1) \
    .build()

The following demonstrates applying the dynamic batch config at LLM upload:

llm_model = (wl.upload_model(model_name,
                             model_file_name,
                             framework=framework,
                             input_schema=input_schema,
                             output_schema=output_schema)
             .configure(input_schema=input_schema,
                        output_schema=output_schema,
                        dynamic_batching_config=dynamic_batch_config)
             )

Learn more: Dynamic Batching for LLMs with Wallaroo
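To round out the picture: once the LLM has been uploaded with its dynamic batching configuration, it would typically be added to a Wallaroo pipeline, deployed with one of the deployment configurations shown above, and then invoked for inference. The following is a rough sketch only, assuming the usual Wallaroo SDK pipeline flow (build_pipeline, add_model_step, deploy, infer); the pipeline name and the input DataFrame columns are placeholders, not part of the original article:

import pandas as pd

# Hypothetical pipeline setup; the name and input columns are placeholders.
pipeline = wl.build_pipeline("llm-inference-pipeline")
pipeline.add_model_step(llm_model)

# Deploy using one of the deployment configurations defined earlier.
pipeline.deploy(deployment_config=deployment_config)

# A single inference request; concurrent requests from multiple clients
# are accumulated into batches according to the dynamic batching config.
input_df = pd.DataFrame({"prompt": ["Summarize the benefits of dynamic batching."]})
result = pipeline.infer(input_df)
print(result)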
Conclusion

While Managed Inference Endpoints (MaaS) can appear to offer an easy, seamless way for enterprises to deploy LLM inference services, they leave AI technology teams with limited control over data security and privacy, and in some cases over cost. In scenarios where these concerns prevent enterprises from launching and scaling LLMs in production, AI teams may choose to customize and deploy open source LLMs directly within their private environments; these are commonly referred to as "on-prem" LLMs. On-prem LLMs can offer far greater privacy and security than MaaS, but they tend to present challenges around performance and infrastructure cost that can hold back LLM adoption and scale within the enterprise.

Deploying custom LLMs on-prem with Wallaroo helps AI technology teams take back control of these factors. While launching custom LLMs brings new challenges in the form of optimizing inference latency and throughput, these challenges can be overcome efficiently and effectively in Wallaroo by using the Llama.cpp and vLLM frameworks in conjunction with Autoscaling and Dynamic Batching. With control over performance optimization on existing resources in Wallaroo, enterprise technology teams can operate with confidence to launch extensible and efficient custom LLM solutions in production.

The Wallaroo AI Inference Platform enables enterprise technology teams to quickly and efficiently operationalize custom LLMs at scale through an end-to-end LLM lifecycle, from deployment to ongoing model management, with full governance and observability, while realizing optimal performance scaling across Ampere, x64, and GPU architectures in the cloud, on-prem, and at the edge.

Learn More

Wallaroo on Azure Marketplace: Wallaroo AI inference platform - Community Edition
Wallaroo.AI Inference Server Free Edition
Wallaroo AI Inference platform
Video: Deploying LLM Inference Endpoints & Optimizing Output with RAG in Wallaroo
Monitoring LLM Inference Endpoints with Wallaroo LLM Listeners
Contact Us
LLM Documentation

View the full article
-
Hello all. I am having an issue with using OneDrive on Mac. If I open a file, say a Word doc or PowerPoint, from OneDrive through Finder, the AutoSave feature will not be on by default. If I do the same on Windows through File Explorer, the file opens with AutoSave turned on. When I try to turn on AutoSave in Word, it prompts me to upload a new version to OneDrive, which would create a new copy; and that new copy, if opened from Finder, still will not have AutoSave turned on. However, when I open the file from within the Word, Excel, or PowerPoint application, AutoSave is turned on automatically. I would like files opened from my OneDrive folder in Finder to open with AutoSave turned on by default, the way it works on Windows. View the full article
-
Research Drop in Brief: Across 729 customers, the top 20 taxonomy items included on employee surveys remained relatively stable between 2022-23 and 2023-24. eSat, our single-item engagement metric (How happy are you working at <COMPANY_NAME>?), was the most frequently utilized item across customers. All six of the People Success elements are represented among the top utilized survey items, illustrating that organizations are capturing a holistic view of the employee experience. Heading into 2025, we identified three underutilized survey items with direct links to business and employee outcomes that can offer a potential edge in the year ahead: Role Clarity, Career Paths, and Speak My Mind.

#Flexibility #DEI #SkillsEconomy #AI – we are constantly presented with new trends and topics to focus on in the employee experience space, aligned with societal influences, technology advancements, or economic changes. But are these trends reflected in the most frequently used employee survey items? How much variance do we see in the top utilized items year-over-year? To better understand these questions, we turned to our customers to learn which items they have been prioritizing on their surveys over the past two years. The answer? Of the top 20 most utilized items between 2022-23 and 2023-24, 95% are the same. In essence, there seems to be a core set of employee experiences and outcomes that customers across industries are consistently measuring, forming a foundational sense of employee sentiment to track and evaluate year-over-year (YoY). Beyond the top 20 most utilized items, we start to see some changes in item utilization, with items varying by up to 5 percentage points in customer utilization YoY. These shifts show the flexibility within employee surveys for organizations to seek feedback on experiences that may be unique to their context or on trends that are particularly relevant to their industry or size.

One item broke into the top 20 in 2023-24, up 5 spots from the 25th most utilized in 2022-23: our Wellbeing item (<COMPANY_NAME> takes a genuine interest in the employees' well-being). Employee wellbeing is a multi-faceted experience and continues to gain traction as a top priority for organizations and employees, and we see this reflected in the increase in customer utilization over the past year. More organizations are recognizing the tangible benefits of employee wellbeing at work1, and the U.S. Surgeon General released a framework for workplace mental health & wellbeing, underscoring systemic support for holistic employee health2. With the uptick in Wellbeing utilization, we see an example of societal changes reflected in topics of focus for our customers.

Employee engagement remains the north star, but retention intentions remain relatively unexplored

We recommend that your employee surveys include at least one outcome item – something to capture whether the overall employee experience promoted thriving. The current analysis showed that the most frequently utilized survey item year-over-year is eSat, our employee engagement item (How happy are you working at <COMPANY_NAME>?), with 90% of customers including this item on their surveys. This aligns with our People Science recommendations, as eSat captures the end result we hope employees feel at work and has been validated as the strongest general predictor of talent and business outcomes of all 11 of our outcome items3.
Our research shows that, on average, each additional point of engagement (via eSat) reported by employees correlated with a +$46,511 difference in market cap per employee4. Therefore, we are encouraged by the consistent utilization of this item by our customers as it can help them continue to measure and understand their organization’s engagement levels. Beyond engagement, our research has also shown that understanding the intended behaviors of your employees can provide a pivotal insight into whether they currently feel committed enough to the organization, their team, or their role to plan to stay onboard for the foreseeable future. Our Intent to Stay item (I plan to be working at <COMPANY_NAME> two years from now) captures this employee intention and we previously found that voluntary leavers who rate this item unfavorably (Strongly Disagree or Disagree) are 10x as likely to leave their organization as those who rate it favorably (Strongly Agree or Agree)5. However, we discovered that only 24% of our customers included this item in their surveys in the past year. With an ever-present competition for talent, especially within the context of AI workplace transformation6, it can be vital for organizations to get a pulse on the current long-term intentions of their employees. Keep an eye out for our January Research Drop where we will dive deeper into measuring and predicting employee retention! The top 10 most utilized driver items include all six People Success elements On the Viva People Science team, we ground our work in our foundational research on what makes employees happy and successful at work. We found six People Success elements that organizations can use to build their own thriving People Success culture: Wellbeing, Connection, Clarity, Empowerment, Growth, and Purpose (learn more here). Our research shows that these elements are imperative to a holistic and successful employee experience and when we took an even closer look, this time at the top ten driver items included on customer surveys last year (i.e., excluding outcome items such as eSat), we see that these items covered all six of the People Success elements. From this we can see the distribution of driver items across the People Success elements and that over half of customers integrated these top ten items into their surveys last year (even with over 200 Glint taxonomy items included in customer surveys in the same time frame). While every organization is different and has varying priorities that they need to include in their employee surveys, this broad-based distribution of People Success elements illustrates how Glint customers are not solely focused on one aspect of the employee experience, but instead taking a holistic approach to EX. Beyond the basics: Three metrics to gain an edge in 2025 For the most part, the foundational survey topics remain consistent year-over-year. But it can be helpful to plug-and-play new items into your employee surveys to continue to evolve your understanding of your employee base and their experience at your organization. To get you started thinking about topics to measure in the new year, we looked at our business outcome and engagement analysis to discover potential high-impact opportunities. Based on this, three items emerged as high value opportunities for survey inclusion: Role Clarity: I clearly understand what is expected of me in my role. Aligned with the People Success element of Clarity. 
Career Path: <MY_MANAGER> has meaningful discussions with me about my career development. Aligned with the People Success element of Growth. Speak My Mind: I feel free to speak my mind without fear of negative consequences. Aligned with the People Success element of Empowerment. The current utilization rate for these items ranges from 23-27%, indicating some customer use but room for growth. We may already be seeing an uptick in interest for these items heading into 2025, as Role Clarity and Career Path had the highest YoY increase in usage across customers (up 5 and 3 percentage points, respectively) across all driver items. In addition to predicting critical business and employee outcomes, research demonstrates the importance of these topics: Role clarity is a critical predictor of job performance, efficiency, intrinsic motivation, and innovative work behaviors7. Effective career conversations with your managers can impact employee retention, as companies that excel at internal mobility retain employees for 5.4 years, almost 2x longer than companies that struggle with internal mobility8. The Speak My Mind item gets at a component of Psychological Safety, which has a positive impact on turnover, stress, and productivity (Watch our recent webinar on Building Psychological Safety). Heading into 2025, these three survey items could deepen your understanding of your employees’ experience, especially as the workforce continues to undergo an AI transformation and many employees’ job tasks and career paths are being reimagined. Employees may feel uncertain about their changing role expectations and development opportunities, and anxious about sharing their feelings surrounding the changes. Providing your employees with the opportunity to share their perspective can be a differentiator in maintaining a thriving workforce amidst ongoing changes. Consider whether including one or more of these items on your surveys in 2025 would help you be better informed on your employees’ experience! The bottom line: find the balance between consistency and flexibility What we learned from this month’s research drop was that the topics that are top-of-mind for our customers have been consistent, focusing on a wide range of items spanning the six People Success elements. While this stability is helpful in being able to track employee sentiments over time, there is an opportunity to explore additional topics that may be surfacing for your employees. Try not to get caught up in flashy trends and instead focus on experiences that your employees may not have had the opportunity to provide feedback on in the past that could be critical for your organization’s strategy. But don’t forget – the most important aspect of employee surveying is acting on the feedback you receive. Want to learn more about action taking? Check out our September Research Drop on Empowering Managers to Take Action on Survey Results. Stay tuned for our January Research Drop to keep up with what the Viva People Science team is learning! The analysis for this month’s research drop examined LinkedIn Glint customer item utilization between two annual time frames, September 2022 to August 2023 (n = 729 customers) and September 2023 to August 2024 (n = 728 customers). Only items in the Glint taxonomy were included in this analysis, therefore custom items were not analyzed. 1 Forbes. (July 30, 2024). The business case for mental health: Investing in employee well-being. 2 U.S. Department of Health and Human Services. Workplace Mental Health & Wellbeing. 
3 This research draws from employee survey responses from 1,000+ customers, collected from January 1, 2022, to December 31, 2022. The eSat item was utilized in various tests (e.g., multiple regression) to validate its impact on talent outcomes (e.g., attrition, performance) and business outcomes (e.g., stock return, market cap per employee). 4 Microsoft WorkLab. (April 20, 2023). The new performance equation in the age of AI. 5 Cross-customer attrition analysis (n = 33 customers) from June 2023 examining the proportional difference of all employees who have voluntarily left their companies whose item scores were unfavorable compared to those whose scores were favorable. 6 Mercer. (December 11, 2024). Future of work: 2024 global talent trends. 7 Kundu, S. C., Kumar, S., & Lata, K. (2020). Effects of perceived role clarity on innovative work behavior: a multiple mediation model. RAUSP Management Journal, 55(4), 457-472. 8 LinkedIn. (2022). Workplace Learning Report: The Transformation of L&D View the full article
-
Now everyone can use GitHub Copilot for FREE in Visual Studio Code. All you need is your GitHub account. No trial periods, no credit card. Get started with GitHub Copilot today, for free.

Starting today, you and more than 30 million other Visual Studio Code users get access, at no cost, to the most powerful developer productivity tool we have seen in our lifetime. Just follow these steps. When you open Visual Studio Code, you will see this: sign in with your GitHub account and that's it, you can start using GitHub Copilot. You can enable it very easily! And with that, you and the entire community of Visual Studio Code users can access the most incredible productivity tool, completely free.

What do you get with this free tier?

Access to the GPT-4o and Claude models
Context window: 64k today, with 128k coming soon
2,000 code completions (per month)
Code completion: every time GitHub Copilot suggests code while you type.
50 chat requests (per month)
Chat request: anything that is not a completion suggestion (Chat, Inline Chat, commit messages, etc.)

We designed these allowances to be generous enough for most individual users and personal projects. If you need more, the individual plan is unlimited and still costs only 10 USD per month (including access to the o1 and Gemini models).

Visual Studio Code and GitHub Copilot

If you have never used GitHub Copilot, or if you tried it a while ago, a lot has changed in the last two years. Let's take a quick look at how GitHub Copilot started and what it can do for you today, for free. Visual Studio Code was born with the mission of being a complete, portable, web-focused editor, and it found its place as the most popular editor in the world thanks to the community. GitHub Copilot launched in June 2021, the result of joint work by Microsoft, GitHub, and OpenAI, offering revolutionary code completions. With the arrival of ChatGPT, it became clear that AI opened up unimaginable opportunities. We decided to integrate AI as a central pillar of the VS Code experience. Today, AI is a fundamental principle in the design and evolution of the editor.

The AI experience in VS Code

VS Code understands your project, and so does GitHub Copilot. GitHub Copilot knows your repository without uploading it anywhere else and even picks up your uncommitted changes. You can enable context for your entire project with "@workspace" and choose your model (Claude and 4o for free; o1 and Gemini on the paid plan). Beyond chat, there is inline chat, terminal integration, the "Copilot Edits" mode that creates and modifies files, voice support, custom instructions, task-oriented actions (commit messages, name suggestions), and full extensibility to integrate with other extensions. All of this makes Visual Studio Code a truly comprehensive AI-assisted programming platform.

What about Visual Studio?

GitHub Copilot is not exclusive to Visual Studio Code. If you use Visual Studio, you also get a free offering. Check the Visual Studio team's blog for more details on what already works and what is coming.

Reinventing the developer experience

GitHub Copilot moves so fast that the biggest challenge is keeping up. We release a new version of Visual Studio Code every month, each one with improvements for GitHub Copilot.
Follow us on X or Bluesky, or check the release notes when VS Code prompts you to update. 2025 will be a huge year for GitHub Copilot, now a central part of the VS Code experience. Join us on this journey to redefine, once again, the way we work building applications! Get started with GitHub Copilot today, for free. Happy Coding! View the full article
-
Two years ago, GitHub Copilot revolutionized developer productivity with the power of Artificial Intelligence. Today, we are excited to announce that more than 30 million people around the world can use GitHub Copilot for FREE in Visual Studio Code. All you need is a GitHub account. No trial periods, no credit card required.

Get started with GitHub Copilot today, for free

When you open Visual Studio Code, you will find the option to use GitHub Copilot for free. And with that, you and the entire community of Visual Studio Code users can access the most incredible productivity tool, completely free.

What do you get with this free tier?

2,000 code completions (per month)
Code completion: whenever Copilot suggests code in your editor while you type.
50 chat requests (per month)
Chat requests: anything that is not a code completion. This includes Chat, Inline Chat, commit message generation, etc.
Access to the GPT-4o and Claude models
Context window: 64k today, with 128k rolling out soon

We designed these allowances to be generous enough to cover most individual users and personal projects. If you need more, the individual plan is unlimited and still costs only US$10 per month. You also get some extra capabilities, such as the option to use the o1 or Gemini models.

Visual Studio Code and GitHub Copilot

If you have never used GitHub Copilot, or if you tried it some time ago, a lot has changed in the last two years. Let's take a quick look at how GitHub Copilot started and what it can do for you today, for free. Visual Studio Code was born with the mission of being a complete, portable, web-focused editor, and it found its place as the most popular editor in the world thanks to the community. GitHub Copilot launched in June 2021, the result of joint work by Microsoft, GitHub, and OpenAI, offering revolutionary code suggestions. With the arrival of ChatGPT, it became clear that AI opened up unimaginable opportunities. We decided to integrate AI as a central pillar of the VS Code experience. Today, AI is a fundamental principle in the design and evolution of the editor.

The AI experience in Visual Studio Code

VS Code understands your project, and so does GitHub Copilot. GitHub Copilot knows your repository without needing to send it anywhere else and even picks up your uncommitted changes. You can enable context for your entire project with "@workspace" and choose your model (Claude and 4o for free; o1 and Gemini on the paid plan). We offer chat, inline chat, terminal integration, a "Copilot Edits" mode to create and modify files, voice support, custom instructions, task-oriented actions (such as commit messages and name suggestions), and full extensibility for integration with other extensions. This makes Visual Studio Code a complete AI-assisted programming platform.

What about Visual Studio? 🤔

GitHub Copilot is not exclusive to Visual Studio Code. If you use Visual Studio, you will also get a free offering. Check the Visual Studio team's blog for more details on current functionality and what is coming.

Reinventing the developer experience

GitHub Copilot is constantly evolving, and every month we release a new version of Visual Studio Code with significant improvements for GitHub Copilot.
Stay up to date by following us on X or Bluesky, or check the release notes when Visual Studio Code prompts you to update. 2025 will be a huge year for GitHub Copilot, now a central part of the Visual Studio Code experience. Join us on this journey to redefine, once again, how we write code. Get started with GitHub Copilot today, for free. View the full article
-
This new Planner, which was already live on Teams but is now updated for the web, brings together To Do, Planner, and Project. Today marks the general availability of the new Planner for the web, which can be accessed at planner.cloud.microsoft. The transition to this new Planner will occur over the next few months. One of the key benefits of this new Planner is that it uses the same codebase, allowing new features to be released faster based on user feedback. New feature: Portfolios, milestones and baselines Some of the new features include managing multiple plans together through portfolios. This provides a unified overview of the status for all your plans and allows you to track tasks across multiple plans in a single timeline. You can also share portfolios with teammates. Additionally, the new Planner includes baselines in project management, project level variance, task level variance, critical path, and other insights. A roadmap with milestones is also available. A whiteboard feature is enabled within Planner as a tab, allowing you to convert ideas added on post-its into tasks. The Loop feature in Planner enables collaboration on tasks by creating a separate workspace for this purpose. Copilot in Planner Copilot in Planner can create tasks and goals, helping to manage costs and drive revenue growth. A great demo showed how an email can be used to create tasks from its content. Copilot can also build your plan by creating tasks and buckets. A prompt library is available to create, understand, edit, and ask about tasks. Very cool stuff! You can ask Copilot to plan for your next project and it will start generating the work breakdown. Project Manager Agent The Project Manager Agent was announced at Ignite as one of the five agents. When you create a new project manager plan, it will create a plan with a group. The SharePoint agent, which lives on the SharePoint site behind the scenes, will take care of grounding. There are currently 42 agents behind the Project Manager Agent, which are extensible for partners. These agents look at goals, break down tasks, and perform various mini-tasks while communicating with each other. As AI technology continues to evolve, we can expect even more advanced features and capabilities to be integrated into Planner. You can assign tasks to the Project Manager Agent to complete the work, although only a human can mark a task as complete. The Project Manager Agent is currently rolling out in public preview. The new Project Manager homepage and project manager board view are also being introduced. The Multi-Agent Runtime Service (MARS) is a service that uses the right agent to complete specific tasks when needed. Coming soon are status reports, where you can choose your reporting month and report goal, and create a prompt. These reports will be created in a Loop page. There was so much goodness here in the session, and I am looking forward to see what will come next! Beyond ESPC24, continue the learning... The following Microsoft Ignite session recording takes you a little deeper into "Boost productivity with Copilot in Microsoft 365 apps". Watch it now: Also, check out the related Planner blog published during Ignite, "Orchestrating human-AI collaboration in Microsoft Planner" Cheers and happy task'ing, Marijn Somers View the full article
-
I've created a GA report filtered by site (path begins with /sites/sitename) and see drastically different numbers in SPO and GA. Both filters are 7 days.
One page has 29 unique viewers in SPO, 14 in GA.
Some pages are omitted in GA.
Pages from other sites have 1 hit (is GA counting exit pages?)
View the full article