Microsoft Windows Bulletin Board

Windows Server

Everything posted by Windows Server

  1. Hello team, I am not receiving notifications from the MCT Program Announcements board within the MCT Lounge. I created a ticket in the MCT Forum and was guided by the forum moderator to open a discussion here. Below is the ongoing post where I am currently receiving support from the MCT Forum: [Follow-up inquiry] Even after following a specific board in the MCT Lounge, I am not receiving email notifications about new posts. - Training, Certification, and Program Support View the full article
  2. We are announcing the general availability of the Managed Instance link feature for SQL Server 2017, which enables near-real-time data replication from SQL Server to Azure SQL Managed Instance. The link feature is now supported in all SQL Server versions under mainstream and extended support, from SQL Server 2016 through SQL Server 2022. To use the Managed Instance link feature with SQL Server 2017, customers need to install the "Azure Connect Pack for SQL Server 2017". We recommend the latest version of SQL Server Management Studio for creating and managing links with SQL Server 2017. To learn more about the SQL Server – SQL Managed Instance hybrid capabilities that the link feature unlocks, see the feature documentation page. View the full article
  3. Has anyone been successful with this? The upgrade will be pushed automatically to managed workstations via PowerShell / cmd scripts. The goal is to trigger the Windows 11 upgrade on work PCs during business hours (as a silent background process) but to schedule the reboot for after business hours. I have tested a number of ways to do this with the Windows 11 Upgrade Assistant and the Windows 11 ISO setup, but they all end up rebooting the PCs without user intervention, in the middle of work: unsaved documents lost and everything. A sketch of one possible approach follows this post. View the full article
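A minimal sketch of the scheduled-reboot approach described above, assuming the upgrade runs from a mounted ISO on a network share (the share path, reboot time, and task name are placeholders, not from the post). Windows Setup's documented /noreboot switch keeps the upgrade from restarting on its own, and a one-shot scheduled task then reboots after hours:

    # Run the Windows 11 in-place upgrade silently, without rebooting.
    # $IsoPath is a hypothetical location; adjust for your environment.
    $IsoPath = '\\server\share\Win11.iso'
    $Mount   = Mount-DiskImage -ImagePath $IsoPath -PassThru
    $Drive   = ($Mount | Get-Volume).DriveLetter
    Start-Process -FilePath "$($Drive):\setup.exe" `
        -ArgumentList '/auto upgrade /quiet /eula accept /noreboot' -Wait

    # Schedule the reboot for after business hours (19:00 here).
    schtasks /Create /TN 'Win11UpgradeReboot' /SC ONCE /ST 19:00 /RU SYSTEM /F `
        /TR 'shutdown.exe /r /t 300 /c "Restarting to finish the Windows 11 upgrade"'

Whether the upgrade survives a long-deferred reboot cleanly can vary by build, so testing on a pilot group first would be prudent.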
  4. Very old laptop I've got here, and I'm unsure whether it can run W10 or whether I should stick with 7. It's not for my personal use; it's for my aunt. W11 has all the security updates and modern things she might want. I'm willing to upgrade the RAM (DDR3, but still) and add an SSD. View the full article
  5. Hi all, newbie here (though I have been sitting on the sidelines watching and learning for some time!). Please excuse my error if this appears in the wrong thread. I have been attempting to install version 22631.2506 via ISO to my notebook. In previous attempts, bypassing the MS Account requirement was not an issue. With version 23H2, it appears that MS has blocked all possible options, including the Rufus & NTLite approaches. I am hoping to be proven wrong, although upwards of 50 attempts to bypass the MS Account have been unsuccessful. When attempting to bypass connecting to the net, or to bypass NRO via CMD, the system returns you to the start of the onboarding process in a continuous loop! View the full article
  6. A few weeks ago they announced they would start shipping 26120 builds to both Dev and Beta and that the window to move to the Beta Channel would "close soon", effectively urging those in the Dev Channel who wanted to switch to do so ASAP. Turns out that in MSFT terms "soon" is not really soon, and actually means at least two more weeks to a month of waiting (or more). I wonder when the Dev Channel will start getting 27xxx builds... View the full article
  7. So this morning I made the mistake of turning on "Get the latest updates as soon as they're available", and Windows 24H2 started downloading and installed. I was waiting for the usual blue screen; it installed with no blue screen, though I did have to reinstall a couple of small programs to get them to work. Creating a restore point didn't work, however; even going through Windows Settings to create one didn't seem to work either. So I decided to restart, and that's when it all went south. Dell fired up the system hardware check first, which I never get; after it was done, it said no errors were found. (BTW, I also ran sfc and a disk check before restarting, and those likewise reported no issues with the system.) So now I'm back to doing a full system restore of my 23H2 backup. The question is, what could be the problem here? If 24H2 installed OK and wasn't giving me any errors, what would make it not boot as usual? View the full article
  8. Why am I seeing this message in the bottom corner of my computer? I have been using Windows 11 since it came out. I can't think it is a scam. I typed "activate windows" in the search bar and got this... View the full article
  9. I'm running Windows 11 Enterprise, 23H2, build 22631.4751. Unfortunately, it seems that Windows has downloaded 24H2 onto my computer. I have managed to delay installation of this update using Pause updates in Settings > Windows Update. Is there any way to prevent this update from being applied now that Windows has downloaded it? Judging from the answers to my post from a month ago, it seems the answer is no, unfortunately. (A hedged sketch of one way to pin the current release follows this post.) Does installing 24H2 reset customizations in Windows? When I first started using Windows 11, I spent a good deal of time setting customization options in Settings, Control Panel, and the like. Many of these customizations are particularly important for me because I have a physical disability and have limited mobility. Will customizations be lost when 24H2 is installed? This post by @dacrone seems to suggest that the answer is yes, unfortunately. View the full article
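A minimal sketch, not from the original post: on Windows 11 Enterprise and Pro, the documented TargetReleaseVersion Windows Update policy pins the machine to a named feature release (here 23H2, matching the poster's build) even after a newer one has been offered. Run it from an elevated PowerShell prompt; whether it unschedules an already-downloaded update can vary:

    # Pin Windows Update to the 23H2 feature release via policy keys.
    $Key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows\WindowsUpdate'
    New-Item -Path $Key -Force | Out-Null
    Set-ItemProperty -Path $Key -Name 'TargetReleaseVersion'     -Type DWord  -Value 1
    Set-ItemProperty -Path $Key -Name 'ProductVersion'           -Type String -Value 'Windows 11'
    Set-ItemProperty -Path $Key -Name 'TargetReleaseVersionInfo' -Type String -Value '23H2'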
  10. With Windows 11 they screwed up the program further by introducing Controlled Feature Rollouts. They also introduced a "get new updates first" toggle in WU which, thanks to CFR, is partially pointless as well, because turning that toggle on doesn't mean you will get the new stuff to test right away. And the final nail in the coffin is the fact that sometimes new features and changes are shipped to consumers before they are shipped to Insiders in any of the three channels. I predict that sometime in the near future MSFT is going to end the Insider Program. View the full article
  11. Colored folder icons don't seem like the kind of new feature that should appear in an email client that's been around for a long time, but the new Outlook for Windows and OWA now both offer users the ability to choose different colors for folder icons. Apparently, this is an important step forward in the development of the new Outlook and might just be the killer feature to convince the curmudgeons who use Outlook classic to switch. https://office365itpros.com/2025/03/07/colored-folder-icons/ View the full article
  12. I wanted to do a clean install on my PC with only a change of hard drive. For details, see my thread in the antivirus and security listings. Everything went well until Windows refused to pass the hardware requirements check. I had downloaded an ISO from the official Microsoft website and burned it to DVD. The hardware had been approved until now with an Athlon 3000G in an ASRock A320M board (see my computers listed). What is the solution? Have the minimum requirements been raised? View the full article
  13. For safety, I have just made a recovery USB for my Win 11 installation. I am trying to test that it boots up OK, but I have a simple question about using it. All the files seem to have been created on the stick (pic 1); that looks OK. Pic 2 shows the blue screen I get when booting up and using F9 (in my case) to interrupt boot-up. Of the options, the only ones that work are Windows Boot Manager (Team) and UEFI Generic. Windows Boot Manager (Team) seems to boot me up in the normal way, straight into my normal screen. The UEFI Generic option gets me into the start of a "reinstallation" procedure. 1. Is UEFI Generic actually the USB stick? I thought that would show up as USB or something. 2. If I proceeded to boot from the UEFI Generic option, would that lead me into a complete reinstallation and loss of all files and data? I used to use an AOMEI recovery stick, which just booted up the system, but I accidentally erased it, and the new version of AOMEI does not offer "Create a bootable USB" as an option in the free version. So, does anyone know a similar free program that does allow creating a bootable USB? 3. Is there a way of making a W11 recovery stick that avoids losing all my data and just reinstalls the OS? Thanks in advance for any advice you can give. View the full article
  14. I recently came across a super chill music compilation on YouTube that I particularly liked, and I wanted to download the audio and put it on my phone to listen to whenever I wanted. However, the video can only be played online, and after searching for half a day, I couldn't find a particularly smooth way to do it. Originally, I thought of recording the screen and then extracting the audio, but the sound quality was mediocre and there was a lot of background noise. I also tried a few methods I found on the internet, but either the speed was limited or the converted sound quality was poor, and with some of them I couldn't even find the download button. 😂 Does anyone know of a stable way to extract audio from a YouTube video on Windows 11? I'd like to convert it to MP3 or another common format so I can listen to it whenever I want. If you have used a reliable tool, please recommend it! Thank you! View the full article
  15. This article is authored by Michael Olschimke, co-founder and CEO at Scalefree International GmbH, and co-authored by Tim Kirschke, Senior BI Consultant at Scalefree. The technical review was done by Ian Clarke and Naveed Hussain, GBBs (Cloud Scale Analytics) for EMEA at Microsoft.

Introduction

In this series' previous blog articles, we created a Raw Data Vault to store our raw data. In addition to capturing and integrating the data, we applied examples of soft business rules inside the Business Vault. This article focuses on using the data from the combined Data Vault model (that is, both the Raw Data Vault and the Business Vault) and transforming it into valuable information for business users.

Information Delivery

The Raw Data Vault and Business Vault capture the raw data from the source systems and the results of the business logic required by the business users. One could argue that the job is done by then. In reality, however, end users typically don't want to work with Raw Data Vault or Business Vault entities directly. Common reasons include a lack of knowledge of Data Vault modeling and, hence, uncertainty about how to query a Data Vault implementation. Additionally, most end users are already familiar with other data consumption methods, typically dimensional models such as the star or snowflake schema, or fully denormalized flat-and-wide tables. In this article, we discuss fact entities and slowly changing dimensions (SCD) Types 1 and 2. Most information delivery tools, such as dashboarding tools like Microsoft Power BI, or SQL Server Analysis Services for producing OLAP cubes, are easy to use with such models.

How to Deliver Information with Data Vault 2.0

Regardless of the desired information delivery format, it can be queried directly out of Raw Data Vault entities. The Data Vault model follows an optimized schema-on-read design: the raw data is stored as-is, and transformations, such as business logic and structural changes, are applied at query time. The one qualification is that the incoming source data is broken down into its fundamental components: business keys, relationships, and descriptive data. This optimization of the storage makes both the application of business rules and the transformation into any desired target information schema much easier.

Business Vault entities are used during information delivery to apply business rules. In most cases, the raw data is insufficient for reporting: it contains erroneous data, some data is missing, or values need to be converted from one currency to another. However, some of the raw data is good enough for reporting. Therefore, in many cases, information models, such as a dimensional model, are derived from both the Raw Data Vault and the Business Vault by joining the required entities.

Information delivery requirements typically include a historization requirement: an SCD Type 1 dimension includes only the current state of the descriptive attributes, while an SCD Type 2 dimension contains their full history. Data Vault follows a multi-temporal approach and leverages multiple timelines to implement such solutions:

  • The load date timestamp is the technical timeline that indicates when data arrived at the data platform. This timeline must be defined (and controlled) by the data platform team.
  • The snapshot timestamp indicates when information should be delivered to the end user. This timeline is regular (e.g., every morning at 8 a.m.) and is defined by the business user.
  • Business timelines are part of the source data and indicate when something happened. Examples include birth dates, valid-from and valid-to dates, change dates, and deletion dates.

Separating these timelines makes it much more straightforward to build multi-temporal solutions in which some data is back-dated or post-dated. However, this is beyond the scope of this article.

Implementation Walk-Through

To fulfill the business requirements, let's start as simply as possible. For various reasons, it is highly recommended to implement information marts as SQL views initially and to use physical tables only if performance or processing times/costs require it. Other options like PIT and bridge tables typically provide a sufficient (virtualized) solution. We follow this recommendation in this article and start with a dimension view and a fact view.

Store Dimension

Many dimension entities are derived from a hub and its satellite. If no business rules are implemented, the dimension can be derived directly from the Raw Data Vault entities. For example, the following CREATE VIEW statement implements an SCD Type 1 store dimension:

    CREATE VIEW InformationMarts.DIM_STORE_SCD1 AS
    SELECT
        hub.store_hashkey as StoreKey,
        hub.store_id as StoreID,
        sat.address_street as AddressStreet,
        sat.postal_code as PostalCode,
        sat.country as Country
    FROM DV.store_hub hub
    LEFT JOIN DV.store_address_crm_lroc_sat sat
        ON hub.hk_store_hub = sat.hk_store_hub
    WHERE sat.is_current = 1

This simple query accesses the store_hub and joins it to the store_address satellite. It selects the business key from the hub because business users typically want it included in the dimension, and it renames all descriptive attributes from the satellite to make them more readable. The hash key is added for efficient joins from fact entities. Finally, a WHERE clause leverages the is_current flag in the satellite to include only the latest descriptive data. This flag is calculated in a view on top of the actual satellite table; thus, the view is joined, not the table. Only this specific WHERE clause makes this dimension SCD Type 1: leaving it out would automatically lead to an SCD Type 2. In that case, however, it would make sense to additionally include the load_date and load_end_date of the satellite view.

Transaction Fact

The following CREATE VIEW statement implements a fact entity. In this simple example, no aggregations are defined: the granularity of the derived fact entity matches the underlying data of the non-historized link. Therefore, the fact view can be derived directly from the non-historized link without the need for a grain shift, e.g., a GROUP BY clause:

    CREATE VIEW InformationMarts.FACT_STORE_TRANSACTIONS AS
    SELECT
        nl.transaction_id as TransactionID,
        s_hub.store_hashkey as StoreKey,
        c_hub.customer_hashkey as CustomerKey,
        nl.transaction_date as TransactionDate,
        nl.amount as Amount
    FROM DV.store_transaction_nlnk nl
    LEFT JOIN DV.store_hub s_hub
        ON nl.hk_store_hub = s_hub.hk_store_hub
    LEFT JOIN DV.customer_hub c_hub
        ON nl.hk_customer_hub = c_hub.hk_customer_hub

This query selects from the non-historized link and joins both hubs via their hash keys. The hash keys in the SELECT list are taken from these hubs, while the relevant transaction details come from the non-historized link. A filter for historization is not required because both hubs and non-historized links capture only non-changing data.
Capturing changing facts, which in theory should never happen but might happen in reality, is also possible using non-historized links, but is beyond the scope of this article.

Pre-Calculated Aggregations

In most business environments, BI developers would now connect their reporting tool of choice to our provided dimensional model to create custom reports. It is common to aggregate data to calculate sums, counts, averages, or other aggregated values, especially for fact data. Depending on the data volume, the reporting tool, and the aggregation complexity, this can be a challenge for business users. To simplify usage and optimize query performance, a pre-aggregation in the dimensional layer might in some cases be the best choice. For example, the following CREATE VIEW statements implement further store transaction fact views that already include the requested aggregations. Since aggregations are always based on a GROUP BY clause, the views implement grain shifts that calculate the number and amount of transactions on the store and customer dimensions, respectively:

    CREATE VIEW InformationMarts.FACT_AGG_STORE_TRANSACTIONS AS
    SELECT
        s_hub.store_hashkey as StoreKey,
        COUNT(nl.transaction_id) as TransactionCount,
        SUM(nl.amount) as TotalAmount,
        AVG(nl.amount) as AverageAmount
    FROM DV.store_transaction_nlnk nl
    LEFT JOIN DV.store_hub s_hub
        ON nl.hk_store_hub = s_hub.hk_store_hub
    GROUP BY s_hub.store_hashkey

    CREATE VIEW InformationMarts.FACT_AGG_CUSTOMER_TRANSACTIONS AS
    SELECT
        c_hub.customer_hashkey as CustomerKey,
        COUNT(nl.transaction_id) as TransactionCount,
        SUM(nl.amount) as TotalAmount,
        AVG(nl.amount) as AverageAmount
    FROM DV.store_transaction_nlnk nl
    LEFT JOIN DV.customer_hub c_hub
        ON nl.hk_customer_hub = c_hub.hk_customer_hub
    GROUP BY c_hub.customer_hashkey

In both queries, only one hub is required. The hash key of that hub is used in the GROUP BY clause, and three basic aggregations determine the count of transactions and calculate the sum and average of the transaction amounts. While this reduces the workload on the business-user side, the implementation might still be slow or produce high processing costs. In that case, it would make sense to materialize this aggregated fact entity or to introduce a bridge table.

A bridge table is similar to a pre-aggregated fact table in dimensional models. However, it is much more customizable, as it implements only the grain-shift operation (in this case, the GROUP BY clause), the measure calculations, and the timelines. It also contains the hub references, which will be turned into dimension references, as seen in the previous examples. The definition of the bridge table is provided in the following statement:

    CREATE TABLE [DV].[CUSTOMER_TRANSACTIONS_BB] (
        SnapshotDate DATETIME2(7) NOT NULL,
        CustomerKey CHAR(32) NOT NULL,
        TransactionCount BIGINT NOT NULL,
        TotalAmount MONEY NOT NULL,
        AverageAmount MONEY NOT NULL
    );

The code to load the bridge table is similar to the fact view:

    INSERT INTO [DV].[CUSTOMER_TRANSACTIONS_BB]
    SELECT
        SYSDATETIME() as SnapshotDate,
        nl.hk_customer_hub as CustomerKey,
        COUNT(nl.transaction_id) as TransactionCount,
        SUM(nl.amount) as TotalAmount,
        AVG(nl.amount) as AverageAmount
    FROM DV.store_transaction_nlnk nl
    GROUP BY nl.hk_customer_hub;

In many other cases, the bridge table might also contain complex business calculations. Still, the focus is on the grain-shift operation, which takes a considerable amount of time on many traditional database systems due to their row-based storage.
However, Microsoft Fabric uses a different storage format, one optimized for aggregations but typically at the price of joins. The bridge table aims to improve the query performance of fact entities; in turn, that means it is fine to pre-join other data into the bridge table if the join performance is insufficient. A common requirement is the addition of a time dimension.

Snapshot-Based Information Delivery

So far, the store dimension presented in this article has been an SCD Type 1 dimension, a dimension without history. In many cases, however, businesses want to relate facts to the version of the dimension member at the time the fact occurred. For example, suppose an order was issued before the customer relocated to another state. In a Type 1 scenario, the order's revenue would be associated with the customer's current state. Depending on the information requirements, this might not be correct: the revenue should be associated with the customer's state at the time of the transaction. This information requirement demands an SCD Type 2 dimension with history. Point-in-time (PIT) tables are recommended to produce such dimensions efficiently. This section discusses the necessary steps to create such a table.

A good starting point is a date table. It serves as a reference table for dates and can be used both to produce a date dimension and to populate the PIT table. The following statements create the table and initialize it with dates between 1970 and 2099:

    CREATE SCHEMA CONTROL;
    GO

    CREATE TABLE CONTROL.Ref_Date_v0 (
        snapshot_datetime datetime2(6),
        snapshot_date date,
        year int,
        month int,
        quarter int,
        week int,
        day int,
        day_of_year int,
        week_day int,
        beginning_of_year bit,
        beginning_of_quarter bit,
        beginning_of_month bit,
        beginning_of_week bit,
        end_of_year bit,
        end_of_quarter bit,
        end_of_month bit,
        end_of_week bit
    );
    GO

    WITH date_base AS (
        SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) v(n)
    ), date_basic AS (
        SELECT TOP (DATEDIFF(DAY, '1970-01-01', '2099-12-31') + 1)
            ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) - 1 AS rn
        FROM date_base ones, date_base tens, date_base hundreds,
             date_base thousands, date_base ten_thousands
        ORDER BY 1
    ), snapshot_base AS (
        SELECT
            CAST(DATEADD(DAY, rn, '1970-01-01 07:00:00') AS datetime2) AS snapshot_datetime,
            CAST(DATEADD(DAY, rn, '1970-01-01') AS date) AS snapshot_date
        FROM date_basic
    ), snapshot_extended AS (
        SELECT
            snapshot_datetime,
            snapshot_date,
            DATEPART(YEAR, snapshot_date) AS year,
            DATEPART(MONTH, snapshot_date) AS month,
            DATEPART(QUARTER, snapshot_date) AS quarter,
            DATEPART(WEEK, snapshot_date) AS week,
            DATEPART(DAY, snapshot_date) AS day,
            DATEPART(DAYOFYEAR, snapshot_date) AS day_of_year,
            DATEPART(WEEKDAY, snapshot_date) AS week_day
        FROM snapshot_base
    )
    INSERT INTO CONTROL.Ref_Date_v0
    SELECT *,
        CASE WHEN day_of_year = 1 THEN 1 ELSE 0 END AS beginning_of_year,
        CASE WHEN day = 1 AND month IN (1, 4, 7, 10) THEN 1 ELSE 0 END AS beginning_of_quarter,
        CASE WHEN day = 1 THEN 1 ELSE 0 END AS beginning_of_month,
        CASE WHEN week_day = 2 THEN 1 ELSE 0 END AS beginning_of_week,
        CASE WHEN snapshot_date = EOMONTH(snapshot_date) AND month = 12 THEN 1 ELSE 0 END AS end_of_year,
        CASE WHEN snapshot_date = EOMONTH(snapshot_date) AND month IN (3, 6, 9, 12) THEN 1 ELSE 0 END AS end_of_quarter,
        CASE WHEN snapshot_date = EOMONTH(snapshot_date) THEN 1 ELSE 0 END AS end_of_month,
        CASE WHEN week_day = 1 THEN 1 ELSE 0 END AS end_of_week
    FROM snapshot_extended;

The first part is a simple DDL statement that creates the reference date table.
This is followed by an INSERT statement that leverages several Common Table Expressions (CTEs) to simplify the logic. The first CTE, date_base, simply generates the digits 0 through 9. The next CTE, date_basic, CROSS JOINs it with itself five times, creating 10^5 = 100,000 rows; a ROW_NUMBER() turns these into an ascending number rn starting at 0, and the TOP clause limits the output to one row per day in the target range. The CTE snapshot_base uses rn in a DATEADD() on top of the specified start date, '1970-01-01 07:00:00', to generate the list of daily dates, once as datetime2 and once as date. The last CTE, snapshot_extended, adds metadata like MONTH, YEAR, etc. Lastly, boolean columns that mark the beginnings and ends of weeks, months, quarters, and years are added, and everything is inserted into the reference date table.

This reference date table can now be used to create and load a point-in-time (PIT) table. The PIT table precalculates, for each snapshot date timestamp (SDTS), which satellite entry is valid for each business key. The granularity of a PIT is therefore (number of snapshots) * (number of business keys) = row count in the PIT. The following code creates and populates a simple PIT example for stores:

    CREATE TABLE DV.STORE_BP (
        hk_d_store CHAR(32) NOT NULL,
        hk_store_hub CHAR(32) NOT NULL,
        snapshot_datetime datetime2(6) NOT NULL,
        hk_store_address_crm_lroc_sat CHAR(32) NULL,
        load_datetime_store_address_crm_lroc_sat datetime2(6) NULL
    );

    WITH pit_entries AS (
        SELECT
            CONVERT(CHAR(32), HASHBYTES('MD5', CONCAT(hub.hk_store_hub, '||', date.snapshot_datetime)), 2) as hk_d_store,
            hub.hk_store_hub,
            date.snapshot_datetime,
            COALESCE(sat1.hk_store_hub, '00000000000000000000000000000000') as hk_store_address_crm_lroc_sat,
            COALESCE(sat1.load_datetime, CONVERT(DATETIME, '1900-01-01T00:00:00', 126)) as load_datetime_store_address_crm_lroc_sat
        FROM DV.store_hub hub
        INNER JOIN CONTROL.Ref_Date_v0 date
            ON hub.load_datetime <= date.snapshot_datetime
        LEFT JOIN DV.store_address_crm_lroc_sat sat1
            ON hub.hk_store_hub = sat1.hk_store_hub
            AND date.snapshot_datetime BETWEEN sat1.load_datetime AND sat1.load_end_datetime
    )
    INSERT INTO DV.STORE_BP
    SELECT *
    FROM pit_entries new
    WHERE NOT EXISTS (SELECT 1 FROM DV.STORE_BP pit WHERE pit.hk_d_store = new.hk_d_store)

The single CTE, pit_entries, defines the whole set of PIT entries. The store hub is joined against the snapshot table only where the hub's load date lies before the SDTS, which reduces the number of rows. But since there is no more specific join condition, the number of rows after this join is already a multiple of the number of rows in the hub. Next, the only satellite attached to the store hub, store_address_crm_lroc_sat, is joined. It is joined on the hash key, and additionally the load_datetime and load_end_datetime are leveraged, using BETWEEN, to determine the record valid for a specific SDTS. The SELECT list of this CTE introduces a new concept, a dimensional key: hk_d_store, generated by hashing the store hub's hash key together with the SDTS. This creates a new unique column that can be used for the primary key constraint and for incremental loads. Additionally, both components of the dimensional key, hk_store_hub and snapshot_datetime, are selected. The hash key and load_datetime of the satellite are also selected to uniquely identify one row of the satellite; they are renamed to include the satellite's name, which helps when joining multiple satellites instead of just one. A typical PIT always brings together all satellites connected to a specific hub.
Therefore, a typical PIT has various combinations of hash key and load_datetime columns. Ultimately, we insert only those rows whose dimensional key does not already exist in the target PIT; this additional clause enables incremental loading.

This PIT can now be used as the starting point for a snapshot-based store dimension. To produce a historized (SCD Type 2) store dimension, the PIT is joined with the hub and the satellite:

    CREATE VIEW InformationMarts.DIM_STORE_SB AS
    SELECT
        pit.snapshot_datetime as SnapshotDatetime,
        hub.store_id as StoreID,
        sat.address_street as AddressStreet,
        sat.postal_code as PostalCode,
        sat.country as Country
    FROM DV.STORE_BP pit
    INNER JOIN DV.store_hub hub
        ON hub.hk_store_hub = pit.hk_store_hub
    INNER JOIN DV.store_address_crm_lroc_sat sat
        ON pit.hk_store_address_crm_lroc_sat = sat.hk_store_hub
        AND pit.load_datetime_store_address_crm_lroc_sat = sat.load_datetime

With all history precalculated in our PIT, the actual dimension can be virtual again, because the only operation required is an INNER JOIN. Additional information and patterns about PIT and bridge tables can be found on the Scalefree Blog.

Conclusion

Data Vault has been designed to integrate data from multiple data sources, break the data down into its fundamental components, and store and organize it so that any target structure can be derived quickly. This article focused on generating information models, often dimensional models, using virtual entities; they are used in the data architecture to deliver information. After all, dimensional models are easier for dashboarding solutions to consume, and business users know how to use dimensions and facts to aggregate their measures. However, PIT and bridge tables are usually needed to maintain the desired performance level. They also simplify the implementation of dimension and fact entities and, for those reasons, are frequently found in Data Vault-based data platforms.

This article completes the information delivery part of the series. The following articles will focus on the automation aspects of Data Vault modeling and implementation.

<<< Back to Blog Series Title Page View the full article
  16. I operate a business that mostly depends on design by simulation, relying on constant operation of big multi-core PCs at very high CPU utilization. And because of the high utilization, I seem to kill them on a fairly routine basis: 3 Dell 7820 Xeon Golds in the last two years, which is worse than average; I probably kill one every second year on average. We can talk about what's dying separately; it doesn't matter here. The issue at hand is DOWN TIME. View the full article
  17. Hi all, I recently bought a new SSD (Samsung 990 Pro 2TB) and I want to install my system on it. I have already used my machine for a while, but on another SSD (Kingston KC2000 1TB). I don't want to do a complete install of all the software, games, files, drivers, and so on; that's why I think cloning would be better, but I've never done that. Is it a good idea to clone my whole Windows SSD to the new SSD? What problems might occur, or will it be flawless? I'm just asking because I don't want to waste my time compared with a clean install, and I'd be glad to hear about your experience with it. View the full article
  18. I have an issue within IT where I frequently use the Start menu and type "Check for Updates"... it's sort of the first thing I do when I have clients with Windows issues... but more often than not, "Check for Updates" gives me Java Update as the first result, and, always being rushed, I click on it far too often. Is there any way to make sure the Java "Check for Updates" software NEVER shows up in the search results, or at the very least falls behind the Windows Check for Updates? One possible mitigation is sketched after this post. View the full article
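One possible approach, sketched below: Windows Search surfaces Start menu shortcuts, so deleting (or renaming) the Java updater's shortcut should keep it out of the results. The shortcut path is an assumption about where the Java installer puts it; verify it on an affected machine first:

    # Remove the Java updater's Start menu shortcut so Windows Search
    # stops offering it for "Check for Updates". Path is hypothetical.
    $Shortcut = Join-Path $env:ProgramData 'Microsoft\Windows\Start Menu\Programs\Java\Check For Updates.lnk'
    if (Test-Path $Shortcut) {
        Remove-Item -Path $Shortcut -Force
    }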
  19. Hi: I'm trying to install Windows 11 on my HP laptop from a USB recovery drive. The process stops at the network connection step, not finding drivers for the wireless adapter. I found a few solutions online, like "Shift + F10" to open a command prompt, or activating a virtual machine to do that. None of them worked in my case, and the little Accessibility icon in the right corner is disabled too. On my last try I got a message asking me to install the drivers for the wireless adapter. I downloaded two possible drivers from the HP web support site and expanded them on my desktop. The problem is the laptop can't see any files on the USB I'm using to copy the drivers. I'd really appreciate any help on this matter, because it's already been a week with this issue and I really need the laptop for work. Best, Nestor View the full article
  20. Hi folks. I'm using Windows Server 2025 on a bog-standard (decent) laptop as a workstation, having removed all the "server"-specific stuff such as Ctrl+Alt+Delete to log on, nag screens asking for a log when the system is shut down / rebooted, password restrictions, RDP restrictions, etc. You can get a 180-day (six times extendable) free trial if you want to try this too. It's far better than standard Windows 11, with no bloat either. Only Macrium needs a "server" version of its software; otherwise everything I need, including Office 2021, runs perfectly. Also, Hyper-V (good though it is on W11 Pro) seems even better on the server. Note, though, that there's no "Quick Create" VM wizard, but IMHO if you can turn the server into a desktop, creating a Hyper-V VM should be child's play. I'll set up a couple of Linux VMs on this just to test; Windows VMs are a doddle. Screenshot running on laptop. View the full article
  21. Can't log in to Office or Teams on my personal PC; the app just gives me error code 2603 or a message saying "We can't connect you". The same happens when trying to access login.microsoft through Chrome: I get ERR_NETWORK_ACCESS_DENIED and a message saying that I have no internet access. Other than that, every other website and program works with no issue. Would really appreciate the help. View the full article
  22. Hey, during my setupcomplete.cmd I am performing some Windows updates, etc. (stuff that requires a restart). Microsoft clearly states not to include a reboot command within setupcomplete.cmd, as the Windows install process might be interrupted. So what are my options for automatically triggering a restart as soon as the Windows install process is complete? My current idea is to start, from within setupcomplete.cmd, a separate, non-waited PowerShell script that checks whether the windeploy process is still running and, if not, fires a shutdown /r /t 60. Any different ideas, or thoughts on the benefits? A sketch of the watcher idea follows this post. View the full article
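A minimal sketch of the watcher idea described in the post, assuming the script is saved somewhere like C:\Windows\Setup\Scripts\watch-reboot.ps1 (a hypothetical path) and that polling for windeploy.exe is a reliable signal that Setup has finished:

    # Launched from SetupComplete.cmd without waiting on it, e.g.:
    #   start "" powershell.exe -ExecutionPolicy Bypass -File C:\Windows\Setup\Scripts\watch-reboot.ps1
    # Poll until windeploy.exe is gone, then trigger the restart.
    while (Get-Process -Name 'windeploy' -ErrorAction SilentlyContinue) {
        Start-Sleep -Seconds 30   # Setup is still finalizing; keep waiting
    }
    shutdown.exe /r /t 60 /c 'Restarting to complete Windows updates'

Starting the script detached keeps SetupComplete.cmd itself free of reboot commands, in line with Microsoft's guidance quoted in the post.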
  23. Hello, I would like your feedback on moving a folder in Documents from one SharePoint site to another site on the same Microsoft tenant. I've used several PnP script solutions to download all the content and then send it back via Migration Manager, but it takes a long time. I've tried the Move function, but it's unreliable: loading runs for hours and reloading the page crashes the move. I used these two solutions to move 1 TB, and it was laborious. What's the best practice for this scenario? View the full article
  24. I have finished the Partner Organization Onboarding process. I was able to get the achievement code, and I also have MTM access. However, I cannot see my company show up on https://appsource.microsoft.com/ in the training services partner section. Can anyone advise? View the full article
  25. How can I get rid of this FPS counter? I'm not sure how I ended up with it, but I cannot for the life of me find out how to turn it off. View the full article