Everything posted by Windows Server
-
I'm upgrading my server from 2012 R2 to 2019, then 2022, and I'm getting stopped by the error shown. I've looked through the entire registry and on the disk, and NOWHERE is there anything referencing this service. It doesn't show in Services, and I've tried to reinstall it with Server Manager, but there's nothing in the roles or features to install it. (Obviously, I tried the uninstall option, with nothing found to uninstall.) How do I get it removed? View the full article
-
YubiKeys used:
- 2 x YubiKey 5C NFC, firmware version 5.4.3
- 2 x YubiKey 5C NFC, firmware version 5.7.1
- 2 x YubiKey BIO - FIDO Edition, firmware version 5.6.3

The response is the same with all of these YubiKeys.

This issue does not occur using Windows 11 Pro with multiple browsers: 1 Windows 11 Pro PC (Version 24H2, OS build 26100.2605) with Windows Feature Experience Pack 1000.26100.36.0.

I have this issue with Windows 10 Pro across multiple machines and browsers: 2 Windows 10 Pro PCs with different hardware and the same software (Version 22H2, OS build 19045.5247) with Windows Feature Experience Pack 1000.19060.1000. View the full article
-
Hi everyone, I need to download a bunch of files from this website:

https://www.finra.org/finra-data/browse-catalog/equity-short-interest/files

The address doesn't show the filters that need to be applied. If you go to that website and select 'Any' for both Month and Year, you will see all the files. Can someone help me with creating a PowerShell script to download all the files to a local folder on my machine? Thank you. cc: LainRobertson View the full article
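A hedged starting point: if the rendered page exposes the files as direct links, a sketch along these lines could work. The page may instead build its file list dynamically via JavaScript, in which case the link list would need to be captured another way; the output folder and extension filter below are placeholders.

# Sketch: download every file link found on a page to a local folder.
$pageUrl   = 'https://www.finra.org/finra-data/browse-catalog/equity-short-interest/files'
$outFolder = 'C:\Temp\FinraFiles'   # placeholder local target folder
New-Item -ItemType Directory -Path $outFolder -Force | Out-Null

# Fetch the page and collect href values that look like downloadable files.
$page  = Invoke-WebRequest -Uri $pageUrl -UseBasicParsing
$links = $page.Links.href | Where-Object { $_ -match '\.(zip|csv|txt)$' } | Sort-Object -Unique

foreach ($link in $links) {
    # Resolve relative links against the site root before downloading.
    $uri  = if ($link -match '^https?://') { $link } else { "https://www.finra.org$link" }
    $file = Join-Path $outFolder ([System.IO.Path]::GetFileName($uri))
    Invoke-WebRequest -Uri $uri -OutFile $file
}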
-
Good afternoon! When I join a meeting, I speak but my microphone doesn't work. I can only hear the other participants; they can't hear me. View the full article
-
Introduction

The emergence of GenAI and services associated with it, such as ChatGPT, Gemini, etc., is creating a scenario where enterprises feel pressure to quickly implement GenAI/LLM solutions to make sure they are not left behind competitively in the race towards broad enterprise GenAI adoption. This urgency has trickled down to the technology teams in these enterprises, who are under pressure to rapidly create and implement GenAI/LLM-enabled products and solutions.

One low-barrier-to-entry GenAI/LLM solution for enterprise technology teams is the Managed Inference Endpoint, or MaaS/LLMaaS (Model as a Service or LLM as a Service). MaaS/LLMaaS are cloud-hosted services designed to simplify deploying and scaling LLMs for inference. The appeal for enterprises seeking to initiate production LLMs is that LLMaaS provides production-ready infrastructure that takes care of all the deployment and scaling complexities for production LLMs.

While LLMaaS is a hands-off solution, effective and reliable production LLMs are highly dependent on the accuracy of the generated output for its consumers. All models decay over time and require model governance to ensure they are performing optimally for their use case. In the case of LLMs, model decay or drift can have negative effects through hallucinations and bias, leading to lack of trust, integrity, satisfaction, and compliance, and potentially legal fallout. Proactively monitoring models for hallucinations and bias is one of the challenges that prevents enterprises from launching LLMs effectively and reliably in production.

Enterprises that have offloaded LLM deployment and inference to LLMaaS can still retain control to mitigate hallucinations and bias through model monitoring and validation techniques such as Retrieval-Augmented Generation (RAG), LLM guardrails, and "LLM as a judge" to monitor and control the output of LLM applications. While those solutions may be effective, can they be implemented efficiently alongside LLMaaS endpoints? Does the complexity of implementing these solutions end up alienating the Data Scientists and/or ML Engineers whose skills are needed to monitor and manage LLMs?

Best Practices for Managed Inference Endpoints Performance

There are a couple of methods that Data Scientists can use to mitigate this hallucination and bias challenge.

Retrieval-Augmented Generation (RAG)

Retrieval-Augmented Generation (RAG) is one method that helps LLMs produce more accurate and relevant outputs, effectively overcoming some of the limitations inherent in their training data. RAG not only enhances the reliability of the generated content but also ensures that the information is up to date, which is crucial for enhancing user trust and delivering accurate responses while adapting to constantly changing information. RAG is also a good low-cost alternative to fine-tuning the model. Fine-tuning models is expensive because of intensive resource consumption, and it also produces diminishing returns for accuracy when compared to RAG.

RAG improves the accuracy and reliability of LLMs by allowing the model to reference an authoritative knowledge base outside of its training data sources before generating a response. RAG creates an up-to-date authoritative source for the model and can quickly incorporate the latest data and provide accurate, up-to-date responses for end users.

The RAG LLM process takes the following steps:

1. Input text first passes through the feature extractor model that outputs the embedding. This is a list of floats that the RAG LLM uses to query the database for its context.
2. Both the embedding and the original input are passed to the RAG LLM.
3. The RAG LLM queries the vector-indexed database for the context from which to build its response. As discussed above, this context prevents hallucinations by providing guidelines that the RAG LLM uses to construct its response.
4. Once finished, the response is submitted as the generated text back to the application.

An example of RAG in action

In the following example, inference requests are submitted either as pandas DataFrames or Apache Arrow tables. The following example shows submitting a pandas DataFrame with the query to suggest an action movie. The response is returned as a pandas DataFrame, and we extract the generated text from there.

data = pd.DataFrame({"text": ["Suggest me an action movie, including its name"]})
result = pipeline.infer(data, timeout=10000)
result['out.generated_text'].values[0]

This results in the following output text.

1. "The Battle of Algiers" (1966) - This film follows the story of the National Liberation Front (FLN) fighters during the Algerian Revolution, and their struggle against French colonial rule.
2. "The Goodfather" (1977) - A mobster's rise to power is threatened by his weaknesses, including his loyalty to his family and his own moral code.
3. "Dog Day Afternoon" (1975) - A desperate bank clerk turns to a life of crime when he can't pay his bills, but things spiral out of control.

Learn More: Retrieval-Generated LLMs with Wallaroo

Wallaroo LLM Listeners™

There may be certain use cases or compliance and regulatory rules that restrict the use of RAG. In such scenarios, LLM accuracy and integrity can still be accomplished through the validation and monitoring components with Wallaroo LLM Listeners™. With the shift to LLMs, together with our customers we came up with the concept of an LLM Listener, which is essentially a set of models that we build and offer off the shelf that can be customized to detect and monitor certain behaviors such as toxicity, harmful language, etc.

For example, you may be looking to generate an alert for poor-quality responses immediately, or even autocorrect that behavior from the LLM, which can be done in-line. If needed, this can also be utilized offline if you're looking to do some further analysis on the LLM interaction. This is especially useful if it's something that is done in a more controlled environment. For example, you can be doing this in a RAG setting and add these validation and monitoring steps on top of that to help further improve generated text output.

The Wallaroo LLM Listeners™ can also be orchestrated to generate real-time monitoring reports and metrics to understand how your LLM is behaving and ensure that it's effective in production, which helps shorten the time to value for the business. You can also iterate on the LLM Listener and keep the endpoint static while everything that happens behind it remains fluid, allowing AI teams to iterate quickly on the LLMs without impacting the bottom line, which could be your business reputation, revenue, costs, customer satisfaction, ROI, etc.

Fig-1

The Wallaroo LLM Listener™ approach illustrated above in Fig-1 is implemented as follows:

1: Input text from the application and corresponding generated text.
2: The input is processed by your LLM inference endpoint.
3: Wallaroo will log the interactions between the LLM inference endpoint and your users in the inference results logs. Data Scientists can see the input text and corresponding generated text from there.
4: The inference results logs can be monitored by a suite of listener models, which can be anything from standard processes to other NLP models that are monitoring these outputs inline or offline. Think of them as things like sentiment analyzers or even full systems that check against some ground truth.
5: The LLM Listeners are going to score your LLM interactions on a variety of factors and can be used to start to generate automated reporting and alerts in cases where, over time, behavior is changing or some of these scores start to fall out of acceptable ranges.

For example, below an inference is performed by submitting an Apache Arrow table to the deployed LLM and LLM Validation Listener, and displaying the results. Apache Arrow tables provide low-latency methods of data transmission and inference.

Input text:

text = "Please summarize this text: Simplify production AI for seamless self-checkout or cashierless experiences at scale, enabling any retail store to offer a modern shopping journey. We reduce the technical overhead and complexity for delivering a checkout experience that’s easy and efficient no matter where your stores are located. Eliminate Checkout Delays: Easy and fast model deployment for a smooth self-checkout process, allowing customers to enjoy faster, hassle-free shopping experiences. Drive Operational Efficiencies: Simplifying the process of scaling AI-driven self-checkout solutions to multiple retail locations ensuring uniform customer experiences no matter the location of the store while reducing in-store labor costs. Continuous Improvement: Enabling integrated data insights for informing self-checkout improvements across various locations, ensuring the best customer experience, regardless of where they shop."

input_data = pa.Table.from_pydict({"text" : [text]})
pipeline.infer(input_data, timeout=600)

pyarrow.Table
time: timestamp[ms]
in.text: string not null
out.generated_text: string not null
out.score: float not null
check_failures: int8
----
time: [[2024-05-23 20:08:00.423]]
in.text: [["Please summarize this text: Simplify production AI for seamless self-checkout or cashierless experiences at scale, enabling any retail store to offer a modern shopping journey. We reduce the technical overhead and complexity for delivering a checkout experience that’s easy and efficient no matter where your stores are located. Eliminate Checkout Delays: Easy and fast model deployment for a smooth self-checkout process, allowing customers to enjoy faster, hassle-free shopping experiences. Drive Operational Efficiencies: Simplifying the process of scaling AI-driven self-checkout solutions to multiple retail locations ensuring uniform customer experiences no matter the location of the store while reducing in-store labor costs. Continuous Improvement: Enabling integrated data insights for informing self-checkout improvements across various locations, ensuring the best customer experience, regardless of where they shop."]]
out.generated_text: [[" Here's a summary of the text: This AI technology simplifies and streamlines self-checkout processes for retail stores, allowing them to offer efficient and modern shopping experiences at scale. It reduces technical complexity and makes it easy to deploy AI-driven self-checkout solutions across multiple locations. The system eliminates checkout delays, drives operational efficiencies by reducing labor costs, and enables continuous improvement through data insights, ensuring a consistent customer experience regardless of location."]]
out.score: [[0.837221]]
check_failures: [[0]]

The following fields are output from the inference:

- out.generated_text: The LLM’s generated text.
- out.score: The quality score.

In addition, we also have the ability to deploy Wallaroo LLM Listeners™ in line to ride alongside the LLM and actually give it the ability to suppress outputs that violate set thresholds from being returned to the user in the first place.

Learn More: LLM Validation with Wallaroo LLM Listeners™

Conclusion

We have seen that Managed Inference Endpoints may not always be the happy path to GenAI/LLM nirvana for enterprises. Lack of control over model governance limits an organization's ability to build industrial-grade practices and operations to maximize the return on their investment in LLMs. With Wallaroo, control over model hallucination and bias behavior can be taken back in house by the organization through implementation of methods such as RAG and Wallaroo LLM Listeners™ to ensure that production LLMs are up-to-date, reliable, robust, and effective, with measures in place for monitoring metrics and alerts. Using RAG and Wallaroo LLM Listeners™ helps mitigate potential issues such as toxicity, obscenity, etc. to avoid risks and provide accurate and relevant generated outputs.

Technology teams that would like to extend this control to data security and privacy can meet their requirements regardless of where the model needs to run with Wallaroo in their private Azure tenant using custom and on-prem LLMs. Wallaroo enables these technology teams to get up and running quickly with custom and on-prem LLMs, on their existing infrastructure, with a unified framework to package and deploy custom on-prem LLMs directly on their Azure infrastructure. In an upcoming blog, we will lay out some important considerations when deploying custom on-prem LLMs on your own infrastructure to ensure optimal inference performance.

Learn More

Wallaroo on Azure Marketplace
Wallaroo AI inference platform - Community Edition
Wallaroo.AI Inference Server Free Edition
Wallaroo AI Inference platform
Video: Deploying LLM Inference Endpoints & Optimizing Output with RAG in Wallaroo
Monitoring LLM Inference Endpoints with Wallaroo LLM Listeners
Contact Us
LLM Documentation

View the full article
-
Azure Database for MySQL - Flexible Server is built on the open-source MySQL database engine, and the service supports MySQL 8.0 and newer versions. This means that users can take advantage of the flexibility and advanced capabilities of MySQL’s latest features while benefitting from a fully managed database service. While newer versions and features can provide a lot of value, the recent issues identified with MySQL versions 8.0+ make it important to be aware of potential risks that can occur during certain operations, particularly if you are making online schema changes.

Issues with data loss and duplicate keys with Online DDL

Online Data Definition Language (DDL) operations are a powerful feature in MySQL, enabling schema changes like ALTER TABLE or OPTIMIZE TABLE with minimal impact on table availability. These operations are designed to reduce downtime by allowing concurrent reads and writes during schema modifications, making them an essential tool for managing active databases efficiently.

However, a recent post on the Percona blog, Who Ate My MySQL Table Rows?, highlights critical risks associated with MySQL 8.0.x versions after 8.0.27 and all versions beyond 8.4.y. Specifically, the open-source INPLACE algorithm, commonly used for online schema changes, can lead to data loss and duplicate key errors under certain conditions. These issues arise from constraints in the INPLACE algorithm, particularly during ALTER TABLE and OPTIMIZE TABLE operations, exposing vulnerabilities that compromise data integrity and system reliability. These risks are called out in the following bug reports:

- Bug #115511: Data loss during online ALTER operations with concurrent DML
- Bug #115608: Duplicate key errors caused by online ALTER operations

Documented issues related to the INPLACE algorithm (used for online DDL) can cause:

- Data Loss: Rows may be accidentally deleted or become inaccessible.
- Duplicate Keys: Indexes can end up with duplicate entries, leading to data consistency issues and potential replication errors.

Problems arise when INPLACE operations, such as ALTER TABLE or OPTIMIZE TABLE, run concurrently with:

- DML operations (INSERT, UPDATE, DELETE): Modifications to table data during the rebuild.
- Purge activity: Background cleanup operations for old row versions in InnoDB.

These scenarios can lead to anomalies resulting from race conditions and incomplete synchronization between concurrent activities.

Impact on Azure Database for MySQL - Flexible Server Customers

For Azure Database for MySQL Flexible Server customers using MySQL 8.0+ and all versions after 8.4.y, this issue is particularly critical as it affects:

Data Integrity: During schema changes such as ALTER TABLE or OPTIMIZE TABLE run using the INPLACE algorithm, data rows may be lost or duplicated if these operations run concurrently with a DML activity (e.g., INSERT, UPDATE, or DELETE) or background purge tasks. This can compromise the accuracy and reliability of the database, potentially leading to incorrect query results or the loss of critical business data.

Replication Instability: Duplicate keys or missing rows can interrupt replication processes, which rely on a consistent data stream across the primary and replica servers. These issues can arise when there are concurrent insertions into the table during schema changes, leading to data inconsistencies between the primary and replicas. Such inconsistencies may result in replication lag, errors, or even a complete breakdown of high-availability setups, requiring manual intervention to restore synchronization.

Operational Downtime: Resolving these issues often involves manually syncing data or restoring backups. These recovery efforts can be time-consuming and disruptive, leading to extended downtime for applications and potential business impact.

Recommendations for safe schema changes on Azure Database for MySQL flexible servers

To minimize the risks of data loss and duplicate keys while making schema changes, follow these best practices:

Set old_alter_table=ON to default to the COPY algorithm: Enable the old_alter_table server parameter so that ALTER TABLE operations without a specified ALGORITHM default to using the COPY algorithm instead of INPLACE. This reduces the risk for users who do not explicitly specify the ALGORITHM in their commands. Learn more on how to configure server parameters in Azure Database for MySQL (a scripted sketch follows after this list).

Avoid using ALGORITHM=INPLACE: Do not explicitly use ALGORITHM=INPLACE for ALTER TABLE commands, as it increases the risk of data loss or duplicate keys.

Back up your data before schema changes: Always perform a full on-demand backup of your server before executing schema changes. This precaution ensures data recoverability in case of unexpected issues. Learn more on how to take full on-demand backups for your server.

Avoid concurrent DML during schema changes: Schedule schema changes like ALTER TABLE and OPTIMIZE TABLE during application maintenance windows when no concurrent write activities occur. This minimizes race conditions and synchronization conflicts.

Use external tools for safer online schema changes: Consider using external tools like pt-online-schema-change to modify table definitions without blocking concurrent changes. These tools enable you to make schema changes with minimal impact on availability and performance. Learn more about pt-online-schema-change. Disclaimer: The pt-online-schema-change tool is not managed or supported by Microsoft; use it at your discretion.
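As a minimal sketch of the first recommendation above: the Az.MySql PowerShell module exposes configuration cmdlets that can flip this server parameter. The cmdlet names used here (Update-AzMySqlFlexibleServerConfiguration, Get-AzMySqlFlexibleServerConfiguration) and the resource names are assumptions to verify against your installed module; the same change can also be made in the Azure portal's server parameters blade.

# Sketch: set old_alter_table=ON so unqualified ALTER TABLE statements default to the
# COPY algorithm. Resource group and server names below are placeholders.
Connect-AzAccount
Update-AzMySqlFlexibleServerConfiguration `
    -Name 'old_alter_table' `
    -ResourceGroupName 'my-resource-group' `
    -ServerName 'my-flexible-server' `
    -Value 'ON'

# Confirm the change took effect.
Get-AzMySqlFlexibleServerConfiguration `
    -Name 'old_alter_table' `
    -ResourceGroupName 'my-resource-group' `
    -ServerName 'my-flexible-server'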
Mitigation plans

To address these risks, we’re actively working to integrate the necessary fixes to ensure a more robust and reliable experience for our customers.

New servers fully secured by end of February 2025: All new Azure Database for MySQL Flexible Server instances created after 1st March 2025 will include the latest fixes, ensuring that schema changes are safeguarded against data loss and duplicate key risks.

Rollout for existing servers: For existing servers, we will roll out patches during upcoming maintenance windows by the end of Q1 of calendar year 2025. We recommend monitoring your Azure portal for scheduled maintenance windows and the Release notes for announcements about critical updates and patches.

Priority updates available upon request: If you require an urgent update outside of the scheduled maintenance windows, you can contact Azure Support. Provide the necessary server details and an appropriate maintenance window, and our team will work with you to prioritize the patching process. Note that priority patching will be available by February 2025. We recommend monitoring the Release notes for announcements about critical updates and patches.

Conclusion

Safely managing schema changes on MySQL servers requires understanding the risks associated with online DDL operations, such as potential data loss and duplicate keys. To help safeguard data integrity and maintain server stability, implement best practices, for example enabling the COPY algorithm, using offline operations if feasible, or scheduling changes during low-activity periods. Fixes are expected by the end of February 2025, and new Azure Database for MySQL flexible servers will be fully protected against these bugs. We will apply updates to existing servers during maintenance windows in Q1 2025. Following the recommendations above will help ensure that you can confidently make schema changes while preserving the reliability and performance of your server. View the full article
-
Special thanks to NChristis for reviewing this blog.

To effectively monitor SAP threats, a clear strategy is essential. It's crucial to secure SAP systems and broaden our security perspective. Many organizations struggle to monitor their SAP environment and have little to no visibility into their SAP landscape. We need to correlate data across the enterprise to get a full view of potential threats. By analyzing various log sources, we can uncover hidden patterns and identify threats. Our solution meets these requirements with specialized detection mechanisms tailored to SAP's unique vulnerabilities.

In this blog post, we'll help you get started by guiding you through how you can test the Microsoft Sentinel solution for SAP, which rules to try out, what to focus on, and how to transition from a proof of value to production. This guide assumes you have already set up your connection between Sentinel and SAP; if you have not done that, go through the steps described in our documentation to connect your SAP systems to Sentinel. If you need assistance verifying that your connector is correctly connected to Azure and checking its health, you can also check the video here.

Evaluation

To evaluate the SAP for Sentinel solution successfully, you need to consider the different people that are involved in using SAP. On the one hand, SAP systems contain a lot of confidential information and critical business processes. This means that the data from SAP systems is important to be monitored by security operations, but also by business owners who have an interest in either compliance or the health of the SAP systems. In the following sections we will describe the different components that are part of the SAP solution in the form of use cases such as "As a <role>, I want to achieve <goal>".

Stakeholder management and gathering

Before we start, it's important that we have the right stakeholders involved. Performing a proof of value with SAP systems requires preparation, as these systems are usually critical to the business. We can divide our stakeholders into four large groups:

- Business/Executive sponsors: this group needs to provide approval for the proof of value and provide approval and budgets to move into production after a successful proof of value.
- SAP team: this team needs to be involved for various reasons, one being that they need to actively participate during the onboarding phase and configure SAP systems. They also need to be involved during the proof of value to show value and make clear what additional security they get from the SAP for Sentinel solution.
- Cloud Infrastructure team: if you are setting up the connector, you will need some infrastructure to be set up, such as a virtual machine, key vaults, etc. At Ignite we also announced an agentless way of connecting your SAP environments, so this team might be optional.
- Security: this team is involved in installing the solution, configuring it, and interpreting the detected threats.

All four groups are important and should be involved. For example, performing a proof of value without buy-in from executive sponsors will result in a dead end where your efforts might not lead to any additional security or implementation if there is, for example, no budget allotted to procuring the solution.

Roles and prerequisites

For a list of the required roles and prerequisites that need to be set in place, please refer to this documentation page.

Success Criteria

This is a crucial step in your proof of value. Testing a solution without clearly defined success criteria will not yield good results, as you will not be able to conclude your proof of value and have a clear path to next steps. If you struggle to define success criteria, here are a few examples; you can take these as-is, but we do suggest you adjust them to your specific business context:

- The solution allows SAP logs to be ingested and parsed into a central location.
- The solution can monitor different layers of the SAP infrastructure.
- Demonstrate successful detection and alerting for predefined use cases, ensuring expected alerts for specific security incidents. This criterion is ideally adjusted to a specific use case you are trying to detect.
- The solution allows monitoring against compliance frameworks such as NIST or SOX. This criterion can be adjusted to monitor against specific frameworks or your own internal ones.
- The solution allows us to be compliant with the European NIS2 regulations by being able to detect and report on incidents within our SAP systems within 24-72 hours.
- The solution allows us to create dashboards to keep track of SAP-related security trends.
- The solution allows us to respond to incidents in SAP systems and automate response in SAP systems.
- The solution allows us to fine-tune detection to fit specific business requirements.

This list is by no means exhaustive, and we encourage you to look for specific criteria that are tailored to your business needs. This ensures that you can clearly articulate the need for a solution to monitor your SAP environment.

Watchlists

Watchlists in Microsoft Sentinel allow you to enrich data from a data source you provide with the events in your Microsoft Sentinel environment. For example, you might create a watchlist containing a list of high-value assets, terminated employees, or service accounts in your environment, which would allow you to monitor unauthorized access to your SAP environment. In normal scenarios you can create your own watchlists, and while you could still do that for your SAP environments, we can rely on the SAP solution for Sentinel, which comes packed with a lot of pre-filled watchlists that in turn help to ensure you haven't forgotten any important assets to cover in detections. These are also essential within the context of the SAP solution; they can be used throughout the different components such as workbooks and analytics rules. Using watchlists also helps to ease the transition from proof of value to production, as we will explain later.

Let's start with our first use case: As a SOC analyst, I want to be able to monitor all the sensitive tables in our organization.

This use case is not difficult to implement, but it demonstrates that protecting your SAP environment requires regular consultation with SAP business owners, who are key stakeholders that need to be involved. So, before implementing this use case, make sure you ask your SAP business owners what tables are important for your enterprise.

Fortunately, we can use one of the built-in watchlists of the Microsoft Sentinel solution for SAP to accomplish this use case. For this, navigate to the watchlist section of Sentinel by opening your Sentinel instance, expanding the Configuration section, and clicking on Watchlist. If you already see a lot of watchlists, you can filter them by typing SAP into the search bar and adding a filter for the Source to be set to Content hub. Scroll down and select the watchlist "SAP - Sensitive Tables", then click the update watchlist button, which allows you to manually edit additional tables. Notice how the list is already prepopulated with some of the native SAP tables that are sensitive. Click on "New" and enter a table name that is not part of the prepopulated list of items, along with a description. Once you are satisfied with the changes, press save to finalize them. In the example below we are going to add a new table that contains HR payroll information.

Now, whenever we are building new analytics rules or using the built-in ones that reference our watchlist for sensitive tables, our HR table is automatically considered and will trigger the rule as well. For an overview of the available prebuilt watchlists in the SAP for Sentinel solution, please refer to the public documentation.

One watchlist provided by the Microsoft Sentinel solution for SAP is called "SAP - Systems". This watchlist contains an overview of your SAP systems; you should adjust it to add your production SAP SIDs. This watchlist is also used throughout the analytics rules, which we will cover in the next section. We can use this watchlist to easily transition from a proof of value to production. During the proof of value we can add development or UAT SAP SIDs and mark their system role as production. This is much easier than having to adjust all the analytics rules to include development or UAT systems.

As shown in the screenshot above, we can write analytics rules (or create dashboards) that refer to a watchlist that during the proof of value can refer to test and development systems. Once we are ready to switch to production, we can easily adjust our watchlists to point to the production systems rather than having to adjust all our rules and dashboards. It is important to track these changes: once you switch to production, you need to make sure that these systems are assigned their proper system role. An easy way to do this is to use the "AdditionalData" field in the watchlist.

In this section, we covered how watchlists are used in the Microsoft Sentinel for SAP solution and how you can use them to monitor sensitive SAP tables. In the next section we will cover how workbooks can be used for monitoring both threats and compliance.

Workbooks

Once your SAP systems have been connected, the SAP for Sentinel solution comes with multiple workbooks out of the box that can be useful to monitor multiple aspects of your SAP solution. Workbooks can be used by different organizational roles to monitor and visualize different properties of connected SAP systems. While you can build your own workbooks from the ground up, the SAP for Sentinel solution comes with built-in ones which can be customized if needed, which brings us to our next use case.

An important note is that the workbooks for SAP in Sentinel only work if there is at least one incident in your workspace; it does not have to be an SAP-specific incident. So, if you are testing this solution in a new Sentinel instance with no data, make sure there is at least one incident in your workspace; you can trigger an incident on an unrelated data source or on the Heartbeat table.

This leads us to our second use case: As an auditor, I want to confirm that our production SAP systems adhere to SOX.

The SOX compliance framework is a set of regulations and best practices to ensure the accuracy and reliability of financial reporting in public companies. To accomplish this, we can use the built-in SAP audit controls workbook. Once the SAP for Sentinel solution has been installed, you can use this workbook. The compliance aspect of this workbook is tied to analytics rules that can be categorized into multiple frameworks. Analytics rules allow you to write queries that look at your SAP data and, based on that, create detections or metrics. The SAP solution for Sentinel comes with built-in analytics rules; it is also possible to create your own.

To access this workbook, open the Sentinel instance that is connected to your SAP environment by navigating to the Sentinel blade, then expand the Threat Management section and select Workbooks. If your workbook is already configured, you will find it under the My Workbooks section; if not, it will be in the Templates section, where you can save it to your workbooks. Opening the workbook from your My Workbooks section then allows you to configure the workbook to your specific needs.

There are two sections in the workbook. One is the filter section on top, which for example allows you to filter for specific SAP systems (e.g. production systems only) or for a specific control framework (e.g. SOX). Given we are looking into the SOX framework and want to look only at our production SAP systems, change the filters so they reflect the right system roles and control framework. Again, here watchlists become crucial: by appointing production roles to our test SAP systems, we can monitor their compliance in this dashboard.

Below that section you can configure which rules are available for SAP systems (out of the box) and have not yet been enabled, and you can select a rule to see how it is categorized with respect to compliance frameworks such as the SOX framework. For the SOX and NIST frameworks, the rules are already categorized for you. Although we are interested in the SOX framework, notice that this section also allows you to categorize an analytics rule against your own organization's framework under "MyOrg Control ID"; we could for example use our internal policy code IAM-001 within the Access controls family. Don't forget to save your changes to persist them after changing these values.

Once the rules have been adjusted to your needs, open the Monitor section of the workbook to see a visual representation of the rules that have been configured for SOX and the incidents that have been triggered for those controls. Note that the data in the different widgets follows your selected filters on top. An overview of the SOX compliance dashboard is also demonstrated in this video.

Our next use case is centered around the security aspects of your SAP systems and focuses on monitoring them for security threats: As a security analyst, I want to monitor for anomalous login attempts into our SAP systems.

We could tackle this use case in multiple ways, one of which could be the use of analytics rules; analytics rules will be covered in the next blog post. The other way to do this is by utilizing the built-in workbooks of the SAP solution for Sentinel. Open the "SAP - Security Audit log and Initial Access" workbook in the Workbooks section (you can find this under Threat Management in Sentinel). If you cannot find the workbook under My Workbooks, look under the Templates section of your workbooks and make sure the template is saved. This report has a familiar setup to the previous report: a filter section and a section with data widgets. Once you have adjusted the filters to your specific needs, there are two sections which you can use to monitor your SAP systems.

The section "Logon analysis report" provides several visuals and tables. Scroll down until you find the "Logon Failures" section. In this section you will find a filter that allows you to filter anomalous logon attempts. Toggling this option allows you to look at these events. If during your proof of value you do not see anything in this section, that is normal, as this workbook looks at data from the past 14 days and you might not have sufficient data yet in your Sentinel instance. Scroll down and you can view the events that are considered anomalous; selecting one of these records will surface additional incidents related to the user that triggered this anomalous failed logon attempt.

In this section, we covered how you can use the built-in workbooks to monitor your SAP environments for both compliance and threats. In the second part of this blog, we will cover how you can create analytics rules, do investigations, and take a brief look at the SOAR capabilities. View the full article
-
This happens more often than I would like. So, quite a few servers, all RDP. Working just fine over the last few months. I have at least three (3) Hypervisors with RDP enabled so I can get to them if needed. All working fine over the last while. Then, Windows updates, and I cannot connect to our servers or hosts anymore. I found that Microsoft reset all the network connections to Public or Private and removed all the connections from "Domain". Of course that ended badly. Users cannot connect to the servers, I cannot log into the servers, I cannot connect to the Hyper-V hosts to reboot said ma… View the full article
-
Table of Contents

Why Encrypt Azure Automation Variables?
How It Works
Prerequisites
Where to get it
How to Use
Conclusion

Why Encrypt Azure Automation Variables?

Sensitive data stored in plaintext variables poses a significant security risk. Encrypted variables provide an added layer of protection, ensuring that even if unauthorized access occurs, your critical data remains safe. This script automates the conversion of non-encrypted variables into encrypted ones, reducing manual effort and ensuring consistency across your Automation environment.

How It Works

The script follows a straightforward yet effective approach (a minimal sketch of this loop appears at the end of this post):

Retrieve Variables: Gathers all variables in a specified Azure Automation Account.
Check Encryption: Identifies any variables that are not encrypted.
Automate Encryption: Removes non-encrypted variables and recreates them with encryption enabled.
Log Progress: Provides clear, detailed logs throughout the process for full transparency.

Even after encryption, the way variables are called in your runbooks (using Get-AutomationVariable) remains the same. No modifications are required for your existing runbooks (unless you are using Get-AzAutomationVariable).

Prerequisites

Before running the script, ensure the following requirements are met:

Azure PowerShell Module: Install the Az.Automation module by running the following command:

Install-Module -Name Az.Automation -Force -AllowClobber

Azure Permissions: The user (or Managed Identity) running the script must have sufficient permissions to read, delete, and recreate the Azure Automation variables.

Authentication: Log in to Azure using Connect-AzAccount:

Connect-AzAccount

Where to get it

Below is the PowerShell script you can use to automate this process (I have also attached the digitally signed script via the zip file): PowerShell script to convert non-encrypted variables in an Azure Automation Account to encrypted variables - GitHub

How to Use

Set Up the Script: Copy the script into your preferred PowerShell editor or upload it to an Azure Automation runbook.
Provide Input Parameters: Specify the Azure Resource Group and Automation Account names as input parameters.
Run the Script: Within Azure Automation, trigger it as a runbook; externally, execute it from any environment with the Az.Automation module installed.
Verify Results: Confirm all variables are encrypted by reviewing them in the Azure portal or using PowerShell (in the example below, replace the Resource Group Name and Automation Account Name with your own):

Get-AzAutomationVariable -ResourceGroupName <resourcegroupname> -AutomationAccountName <automationaccountname>

Conclusion

By automating the encryption process, this script offers a simple, scalable, and secure way to protect sensitive data in Azure Automation. Even better, your existing runbooks require no changes—encrypted variables are accessed in exactly the same way as non-encrypted ones. Whether you manage a handful of variables or an enterprise-scale environment, this tool ensures your secrets are safeguarded with minimal effort. Don’t leave your sensitive data unprotected—run this script today and take a proactive step toward securing your Azure Automation variables. View the full article
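For reference, a minimal sketch of the retrieve/check/recreate loop described in the "How It Works" section. This is not the author's signed script: the resource names are placeholders, and it assumes the non-encrypted variable values can be read back before recreation, so treat it as a starting point only.

# Sketch: convert non-encrypted Azure Automation variables to encrypted ones.
# Assumes the Az.Automation module is installed and you are signed in (Connect-AzAccount).
param(
    [string]$ResourceGroupName     = 'my-resource-group',      # placeholder
    [string]$AutomationAccountName = 'my-automation-account'   # placeholder
)

$variables = Get-AzAutomationVariable -ResourceGroupName $ResourceGroupName `
                                      -AutomationAccountName $AutomationAccountName

foreach ($var in $variables | Where-Object { -not $_.Encrypted }) {
    Write-Output "Converting variable '$($var.Name)' to encrypted..."

    # Capture the current value, remove the plaintext variable, then recreate it encrypted.
    $value = $var.Value
    Remove-AzAutomationVariable -ResourceGroupName $ResourceGroupName `
        -AutomationAccountName $AutomationAccountName -Name $var.Name
    New-AzAutomationVariable -ResourceGroupName $ResourceGroupName `
        -AutomationAccountName $AutomationAccountName -Name $var.Name `
        -Value $value -Encrypted $true -Description $var.Description
}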
-
With over 800,000 organizations relying on Microsoft Entra to navigate the ever-changing identity and network access threat landscape, it's crucial to have increased transparency about product updates, especially those requiring your action. Today, I'm excited to announce the general availability of "What's new in Microsoft Entra". This experience in the Microsoft Entra admin center provides a centralized view of our roadmap and change announcements across the Microsoft Entra identity and network access portfolio. In this article, I'll guide admins on how to make the most of this new feature to stay informed about Entra product updates and actionable insights.

Discover what’s new in the Microsoft Entra admin center

To ensure you have easy access to product updates, we've positioned "What's new" at the top of the Microsoft Entra admin center navigation pane.

Overview of what’s new functionality

What's new: This information hub provides a consolidated view of the Microsoft Entra roadmap and change announcements. It gives administrators a centralized location to track, learn, and plan for the releases and changes across the Microsoft Entra family of products.

Highlights: To make your life easier, the Highlights tab summarizes important product releases and impactful changes. From the Highlights tab, you can select an announcement or release to view its details and access links to documentation for more information.

Roadmap: The Roadmap tab lists the details of public preview and recent general availability releases in a sortable table. From the table, you can select a release to view the release details, which include an overview and a link to learn more.

Change announcements: The Change announcements tab lists upcoming breaking changes, deprecations, retirements, UX changes, and features becoming Microsoft-managed. You can customize your view according to your preferences by sorting or applying filters to prepare a change implementation plan.

Check out the What’s new documentation to learn more.

What’s next?

We’ll continue to extend this transparency into Entra product updates and look forward to elevating your experience to new heights. We would love to hear your feedback on this new capability, as well as what would be most useful to you. Explore what's new in Microsoft Entra now.

Best regards,
Shobhit Sahay

Add to favorites: What’s new in Microsoft Entra

Stay informed about Entra product updates and actionable insights with What’s new in Microsoft Entra. This new hub in the Microsoft Entra admin center offers you a centralized view of our roadmap and change announcements across the Microsoft Entra identity and network access portfolio.

Learn more about Microsoft Entra

Prevent identity attacks, ensure least privilege access, unify access controls, and improve the experience for users with comprehensive identity and network access solutions across on-premises and clouds.

Microsoft Entra News and Insights | Microsoft Security Blog
Microsoft Entra blog | Tech Community
Microsoft Entra documentation | Microsoft Learn
Microsoft Entra discussions | Microsoft Community

View the full article
-
Hi everyone, could you please let me know how I could solve this problem without changing the Calculation Option from Automatic to Manual? I made a new file, which cannot have any error in formulas because it is empty, and I checked the RAM on my computer, but no changes seem to help. Any recommendations would be appreciated. Regards View the full article
-
The Windows OS I am referring to in the title of this thread is Windows 11 Pro with all updates (latest version and build), and the actions I am referring to in the title of this thread are: deleting malicious software, deactivating cracked software, deleting cracked software, or disabling communication between the cracked software and the internet, or vice versa. View the full article
-
The feature I am referring to in this thread's title is language and keyboard layouts; the Windows OS I am referring to in this thread's title is Windows 11 Pro (latest version and build, with all updates); and the aspects I am referring to in this thread's title are the relationships, in terms of language and keyboard layouts, between a Windows 11 Pro (latest version and build, with all updates) that is going to be installed and has not yet started its installation, and a previously installed Windows 11 Pro (latest version and build, with all updates) that has cybersecurity vulnerabilities and malicious software. View the full article
-
Can anyone help me with this migration please? The tenancy has existed for some time and all users have had a Business Standard license to enable them to use Teams and Office 365 for some time. This means that mailboxes were created automatically, but these have never been used. AD Sync was in place. I am trying to migrate from Exchange 2016 to 365. I have:

- Checked each user has an e-mail alias in Exchange @leonardgray.onmicrosoft.com.
- Configured Outlook Anywhere on my on-premises Exchange Server.
- Enabled MRS Proxy on my on-premises Exchange Server.
- Used the Microsoft Exchange Remote Connectivity Analyzer to test my connection settings; the Outlook Anywhere (RPC over HTTP) and Outlook Autodiscover tests show no issues.
- Checked the permissions on the account I'm using to migrate (domain admin, with FullAccess and assigned WriteProperty permissions).
- We're not on Exchange Server 2007, so no need to worry about unified messaging.
- The domains are verified.
- The users are created and licensed.
- I have the CSV with the list of users.
- I've created a variety of migration endpoints; each verified as it saved, and saved successfully.

I initially tried to use a cutover migration, as it is what I have used successfully in the past. This failed; when I researched the error, the articles suggested that it was because AD Sync was in place. So I turned off AD Sync. The cutover migration continued to fail. The articles suggested that a staged migration was a better option for this scenario. This failed with this error:

Error: MigrationTransientException: Failed to update the on-premises mailbox with the target address (SMTP:email address removed for privacy reasons). The mail sent to the user will not reach user's hosted mailbox. The error might be due to account credentials used for migration not having enough permission to write data back into the on-premise AD. Please ensure that the account credential has Domain Admin privilege. --] We weren't able to connect to the remote server. Please verify that the migration endpoint settings are correct and your certificate is valid, and then try again. Consider using the Exchange Remote Connectivity Analyzer (https://testexchangeconnectivity.com) to diagnose the connectivity issues.

I did some more reading and found an article that suggested that staged migrations can only be used from Server 2003 and 2007. So I decided to try a remote migration. This one failed with the error:

Error: TargetUserAlreadyHasPrimaryMailboxException: Target user '902b455c-5986-47c7-9988-bb7282df52b5' already has a primary mailbox.

The articles about this suggest I need to delete the data in homeMDB and homeMTA. The data in homeMDB looks like a pretty important link between Exchange on-prem and Active Directory, so I'm not particularly comfortable with just deleting this. Can anyone tell me what I'm missing? View the full article
-
I'm designing the config for some terminal servers running Server 2025. I want to pin specific icons to the Start menu. In Server 2022 (or Windows 10), this was simply a process of setting up the reference machine how I wanted it, then running

Export-StartLayout -Path "C:\Export\MStartMenuLayout.xml"

to generate the config file, which was then applied using the GPO Computer Configuration\Policies\Administrative Templates\Start Menu and Taskbar\Start Layout - this worked fine. On Server 2025 (and Windows 11), however, it appears that this doesn't work the same any more. Although the export command works, … View the full article
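A hedged pointer for the same wall: on Windows 11-era builds, Export-StartLayout produces a JSON file rather than XML, and pinned items are generally applied through the "Configure Start pins" policy under Start Menu and Taskbar rather than the old Start Layout setting. Whether this carries over unchanged to Server 2025 is an assumption to verify on a test box. A minimal sketch:

# Windows 11-style export produces JSON (the path must use a .json extension).
Export-StartLayout -Path "C:\Export\StartMenuLayout.json"

# Read the JSON string; paste its contents into the "Configure Start pins" policy value.
Get-Content "C:\Export\StartMenuLayout.json" -Raw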
-
I need to create a 24 hour service that starts at 6 am, and ends at 6 am 24 hours later - that is, a service that extends over the turn of the day, and starts at a specified time every day. I have played around with every setting I could find, including staff's work hours, but nothing seems to work. Is this at all possible? If not, what could be the best workaround? View the full article
-
I am posting here because I have not received a response to my support request despite my plan stating that I should hear back within 8 hours. It has now gone a day beyond that limit, and I am still waiting for assistance with this urgent matter. This issue is critical for my operations, and the delay is unacceptable. The ticket/reference number for my original support request was 2410100040000309. And I have created a brand new service request with ID 2412160040010160. I need this addressed immediately. View the full article
-
I have a problem with Intune sync. Suddenly, each device I enroll in Intune shows up with no data or Windows info, applications, or serial number, and even the device actions are dimmed. Each time I restart the device, the sync completes, but after that it stops with multiple errors: some applications and policies are installed, and the remaining policies are not applied successfully, including but not limited to a device status of "Not evaluated" and "Not compliant". And there is no change in the configuration. View the full article
-
I've run the Memory Diagnostic tool several times, but no results are shown in the notification area or in Event Viewer. I can see in Event Viewer that the tool was scheduled. Any suggestions on how to get the results? Thanks. View the full article
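One way to pull the results directly with PowerShell, assuming the scheduled run actually completed and logged to the System log under the Microsoft-Windows-MemoryDiagnostics-Results provider (the provider name is an assumption to verify on your build):

# Sketch: list the most recent memory diagnostic result events.
# If nothing is returned, the scheduled run may never have written a result.
Get-WinEvent -FilterHashtable @{
    LogName      = 'System'
    ProviderName = 'Microsoft-Windows-MemoryDiagnostics-Results'
} -MaxEvents 5 |
    Format-List TimeCreated, Id, Message   # the Message text states whether errors were detected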
-
AzureAD joined device via PPKG didn't enroll in Intune | Microsoft Community Hub → an old reference. I seem to have the same problem. So before the tip comes up: yes, I configured the MDM scope, and the user I created the token with in the WCD is in there. The funny thing is, it worked before; up until the end of November, everything went fine. I had to do some scripting around the bulk joining, but those problems are solved. Then, all of a sudden, it stopped working. No, the tokens I used are still valid, and I created new ones. For several departments I run multiple PPKGs in different subfolders. I let them run through PowerShell. So no errors, but when the device restarts, there is no Intune join - but why? In the error logs (if I looked in the correct ones) there are errors with no substance, like unknown error 0x00... Any leads? Was there an update in any form on the MS side? Anything? Just to be sure, I set the MDM scope to All, as you can see in the screenshot. So, two days with no progress, and now I'm here. View the full article
-
The team has been hard at work innovating and improving Viva Learning. As we approach the end of 2024, below is a roundup of recently released features and a look ahead to 2025. A couple of highlights you might have seen:

- We recently released the much anticipated completion record bulk export functionality.
- Earlier this year we made it possible for Microsoft 365 Copilot license holders to access Microsoft Copilot Academy.

However, there are a few features the team has recently completed that you may have missed, as well as upcoming features right around the corner you likely haven’t heard about yet!

Admin self-serve improvements for SAP SuccessFactors (generally available now)

Integrating SAP SuccessFactors with Viva Learning is easier than ever with the new streamlined configuration process. This self-serve process takes place entirely in the Viva Learning admin tab, rather than splitting operations between Viva Learning and SAP surfaces. Many of the configuration steps have been automated, leading to a decrease in manual-entry errors, less reliance on help desk intervention, and quicker setup times.

Add external content links to Viva Learning tabs and featured sets (generally available now)

External content from providers like YouTube, Vimeo, and Stream can now be included in two new areas within Viva Learning – featured sets and learning tabs. This functionality requires a Viva Learning premium or Viva suite license to access.

Featured sets are curated sets of courses that an organization can choose to display prominently to the entire organization. Featured sets previously leveraged content solely from SharePoint, connected learning management systems (LMS), and third-party providers. Admins can now include external URLs as part of these sets.

Learning tabs allow users to find, curate, and share content in Teams channels or chats. The content available for learning tabs can include any content available to the organization, and now includes the option to insert links to external content providers.

[Image: Adding content to learning tabs now features an option to add linked content, shown in the box on the right side below “add content to your tab”.]

New sort options in Viva Learning search (generally available now)

Viva Learning search functionality continues to improve, providing learners with relevant results and sorting options to expedite content discovery. The latest improvement is the newly launched dropdown menu. Now, users can sort search results in Viva Learning by relevance, number of views, or ratings. These sorting options require a Viva Learning premium or Viva Suite license to access.

[Image: A dropdown menu displayed at the top right of a search result in Viva Learning shows the following sorting options for Premium Viva Learning or Viva Suite customers: relevance, highest rating, most viewed.]

App bar integration (generally available now)

The Viva Learning web app saw navigation improvements through the implementation of the Viva app bar that takes users directly to various Viva modules based on their license. This simplifies access to other Viva modules and provides a consistent navigation experience.

[Image: The Viva app bar is shown in the web app along the left-hand side of the image. This helps users quickly navigate to other Viva modules they use.]

More features coming soon

The following features are scheduled to release in early 2025.

Error framework messages: These descriptive, actionable messages help admins work through failures in the sync or setup process without needing to raise a support ticket with the Viva Learning support team. [Image: Descriptive error messages shown for a sample Workday integration.]

LMS sync logs: Logs featuring granular details of LMS data ingested in Viva Learning will help admins troubleshoot data-related issues like missing assignments or completion data. [Image: Admin experience for accessing logs in the “Manage providers” tab.]

User mapping: Admins will be able to add and edit the user mapping for LMS integrations directly from the admin experience in Viva Learning. This feature will be backward compatible for existing customers. [Image: User mapping can be done from the admin tab in Viva Learning; image shown for Workday user mapping.]

On-demand sync: This feature will allow admins to manually trigger a sync for a specific period. This gives admins more control to troubleshoot and resolve synchronization errors quickly. [Image: On-demand sync feature in the admin tab of Viva Learning.]

View the full article
-
Today, we are excited to announce that the Public Preview of the new Message Trace in the Exchange admin center (EAC) in Exchange Online will begin rolling out mid-December and is expected to be completed by the end of December 2024. Admins will soon be able to access the new Message Trace and its capabilities by default when navigating to the Exchange admin center > Mail flow > Message Trace. As illustrated in the image below, the new Message Trace will be toggled “on” by default once the change has been deployed to your tenant. If you wish to disable the preview, you can do so by toggling this setting to “off.”

Key UI functionality changes

Extended Query Range: You can now query up to 90 days of historical data for near real-time queries. However, please note that you can only query 10 days’ worth of data at a time. You will initially have only 30 days of historical data for near real-time queries, and this will build over time to 90 days of historical data.

Subject Filter: The subject filter for Message Trace queries is now available, supporting "starts with", "ends with", and "contains" functions. This filter also supports special characters.

Delivery Status Filter: The delivery status filter now supports searches for "Quarantined", "Filtered as spam", and "Getting status" statuses.

Additional UI updates based on feedback

Customizable Columns: For your search results, we’ve introduced customizable columns and added additional column options that you can select from. Please refer to the image below for the new columns that have been added.

Persistent Column Widths: You will be able to customize your column widths, and these changes will be sticky per logged-on admin account, so they will not have to be reset every time you run a new Message Trace query.

Wider Flyout Option: An option for a wider flyout for the Message Trace detail is now available.

Time Zone Consistency: Message Trace will now default to the time zone set in the Exchange account settings of the logged-on admin.

Key cmdlet changes from Get-MessageTrace

Extended Query Range: Ability to query up to 90 days of historical data. However, please note that you will only be able to query 10 days’ worth of data per query. You will initially have only 30 days of historical data for near real-time queries, and this will build over time to 90 days of historical data.

Subject Parameter: The addition of a subject parameter allowing for more specific Message Trace queries.

No Page number or Page size parameter: There will not be pagination support in the new Message Trace cmdlet.

Result size parameter: The new Message Trace will support a default value of 1000 results and a maximum of 5000 results (set via the -ResultSize parameter), which is a significant increase. This change is to ensure fair use of our resources, as pagination can create performance issues for our system.

StartingRecipientAddress parameter: This parameter’s main use is to assist in pulling subsequent data while minimizing duplication. Since pagination will no longer be supported, you can utilize the EndTime parameter with the EndTime of the last record of the query results and fill in the StartingRecipientAddress parameter with the RecipientAddress of the last record of the previous result. See the example below for more details (a scripted sketch of this pattern also appears at the end of this post).

Example of differences between V1 and V2

For the sample data above, you can pull the first 10 records with either query:

Old Message Trace:
Get-MessageTrace -StartTime '2024-11-01T00:00:00Z' -EndTime '2024-11-01T00:10:00Z' -Page 1 -PageSize 10

New Message Trace:
Get-MessageTraceV2 -StartTime '2024-11-01T00:00:00Z' -EndTime '2024-11-01T00:10:00Z' -ResultSize 10

To pull the next subsequent records, you could use either of the following queries. In the V2 example below, we are using the StartingRecipientAddress of the last recipient (r_1_010@contoso.com) from the previous results.

Old Message Trace:
Get-MessageTrace -StartTime '2024-11-01T00:00:00Z' -EndTime '2024-11-01T00:10:00Z' -Page 2 -PageSize 10

New Message Trace:
Get-MessageTraceV2 -StartTime '2024-11-01T00:00:00Z' -EndTime '2024-11-01T00:10:00Z' -ResultSize 10 -StartingRecipientAddress r_1_010@contoso.com

Known differences and on-going updates

- Message Trace V1 normalizes all recipients to lowercase, while V2 keeps them the same as in the Message Tracking Logs.
- When displaying the results, V2 will order by ReceivedTime in descending order first, then RecipientAddress in ascending order (case-insensitive).
- In some rare cases, FromIp may be missing in V2, but we are working to fix this issue.
- For messages with over 1000 recipients, admins must include the MessageTraceId in both the EAC and PowerShell cmdlet queries to avoid partial results.
- For quarantine scenarios, V2 will display the latest status, while V1 displayed the original status. So if the email is quarantined initially and then released by the administrator later, Message Trace V2 will show the latest status, which is delivered to mailbox.

Microsoft 365 Messaging Team

View the full article
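For reference, a scripted sketch of the re-anchoring pattern described in the StartingRecipientAddress section. The property names read from the returned records (ReceivedTime, RecipientAddress) are assumptions to confirm with Get-Member on an actual result object.

# Sketch: page through Get-MessageTraceV2 results without pagination parameters by
# re-anchoring EndTime and StartingRecipientAddress on the last record returned.
$start = (Get-Date).AddDays(-5)
$end   = Get-Date
$all   = @()

$batch = Get-MessageTraceV2 -StartTime $start -EndTime $end -ResultSize 5000
$all  += $batch

while ($batch.Count -eq 5000) {
    # Last record is the oldest one returned (results are ordered by received time, descending).
    $last  = $batch[-1]
    $batch = Get-MessageTraceV2 -StartTime $start -EndTime $last.ReceivedTime `
                 -StartingRecipientAddress $last.RecipientAddress -ResultSize 5000
    $all  += $batch
}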
-
Hello, I have a question regarding in-scope SKUs of the new pricing update that takes effect 4/1/2025 for monthly billing/annual term subscriptions. (Specifically, Dynamics 365 SKUs in the CSP NCE pricelist) In the announcements and FAQs, in some parts the term "user license" is used, and in others it seems to apply to all SKUs with an annual term/monthly billing option. Would you be able to tell me if the 5% increase applies to the former or latter? Thank you in advance. View the full article
-
Hello. Maybe you can help me to find something out. First, how the problem started: we registered for non-profit in November this year and got checked by TechSoup, which was no problem. We transferred all mailboxes and used 6 licences. But one notebook was not able to log in, so I opened a ticket about the problem. I got some mails with "we are working on it" etc., and the last mail was "your service was cancelled with end date of actual period". No chance to get a reason why (only that it was a result of internal processes). Login was not possible anymore, so I started to transfer again, this time to G.... No problem. Meanwhile, last week I got a "tell us of your experience" from Microsoft and gave 1 star with a bad comment. Today I got a newsletter, clicked on it, and could log in to the non-profit tenant?? Where can I see if my non-profit is now cancelled or not? I can't find any end date. View the full article
-
In the rapidly evolving digital workspace, Microsoft has consistently been at the forefront of innovation, and their latest offering — Teams Immersive Spaces powered by Mesh — is a testament to this leadership. Designed to redefine how teams collaborate in a hybrid work environment, these 3D spaces provide a groundbreaking approach to communication and connection. Here, we delve into what makes these features unique and how they are reshaping the future of collaboration. https://dellenny.com/microsoft-teams-immersive-space-3d-and-mesh-app/ View the full article