Microsoft Windows Bulletin Board

Windows Server

Active Members
  • Posts

    5710
  • Joined

  • Last visited

Everything posted by Windows Server

  1. With Windows 11 they screwed up the Program further by introducing Controlled Feature Rollouts. They also introduced a “get new updates first” toggle in WU which, thanks to CFR, is partially pointless as well, because turning that toggle on doesn’t mean you will get the new stuff to test right away. And the final nail in the coffin is the fact that sometimes new features and changes are shipped to consumers before they are shipped to Insiders in any of the three channels. I predict that sometime in the near future Microsoft is going to end the Insider Program. View the full article
  2. Colored folder icons don’t seem like a new feature that should appear in an email client that’s been around for a long time, but the new Outlook for Windows and OWA now both offer users the ability to choose different colors for folder icons. Apparently, this is an important step forward in the development of the new Outlook and might just be the killer feature to convince the curmudgeons who use Outlook classic to switch. https://office365itpros.com/2025/03/07/colored-folder-icons/ View the full article
  3. I wanted to do a clean install on my PC with only a change of hard drive. For details see my thread in the antivirus and security listings. Everything went well until Windows setup refused to pass the hardware requirements check. I had downloaded an ISO from the official Microsoft website and burned it to DVD. The hardware has passed the check until now with an Athlon 3000G in an ASRock A320M board (see my computers listed). What is the solution? Have the minimum requirements been raised? View the full article
  4. For safety I have just made a recovery USB for my Win 11 installation. I am trying to test it to see that it boots up OK, but I have a simple question about using it. All the files seem to have been created on the stick (pic 1) - does that look OK? Pic 2 shows the blue screen I get when booting up and using F9 (in my case) to interrupt boot-up. Of the options, the only ones that work are the Windows Boot Manager (Team) and the UEFI Generic. The Windows Boot Manager (Team) seems to boot me up in the normal way, straight into my normal screen. The UEFI Generic option gets me into the start of a "reinstallation" procedure. 1. Is the UEFI Generic entry actually the USB stick? I thought that would show up as USB or something. 2. If I proceeded to boot from the UEFI Generic option, would that lead me into a complete reinstallation and loss of all files and data? I used to use an AOMEI recovery stick which just booted up the system, but I accidentally erased it, and the new version of AOMEI does not offer "Create a bootable USB" as an option in the free version. So, does anyone know a similar free program that does allow creating a bootable USB? 3. Is there a way of making a Win 11 recovery stick that avoids losing all my data and just reinstalls the OS? Thanks in advance for any advice you can give. View the full article
  5. I recently came across a super chill music compilation on YouTube that I particularly liked, and I wanted to download the audio and put it on my phone to listen to whenever I wanted. However, the video can only be played online, and after searching for half a day, I couldn't find a particularly smooth way. Originally, I thought of recording the screen and then extracting the audio, but the sound quality was mediocre and there was a lot of background noise. I also tried a few methods I found on the internet, but either the speed was limited or the converted sound quality was poor, and with some of them I couldn't even find the download button. 😂 Does anyone know of a stable way to extract audio from a YouTube video on Windows 11? I'd like to convert it to MP3 or another common format so I can listen to it whenever I want. If you have used a reliable tool, please recommend it. Thank you! View the full article
  6. This article is authored by Michael Olschimke, co-founder and CEO at Scalefree International GmbH, and co-authored by Tim Kirschke, Senior BI Consultant at Scalefree. The technical review was done by Ian Clarke and Naveed Hussain, GBBs (Cloud Scale Analytics) for EMEA at Microsoft.

Introduction

In the previous blog articles of this series, we created a Raw Data Vault to store our raw data. In addition to capturing and integrating the data, we applied examples of soft business rules inside the Business Vault. This article focuses on using the data from the combined Data Vault model (that is, both the Raw Data Vault and the Business Vault) and transforming it into valuable information to provide to business users.

Information Delivery

The Raw Data Vault and Business Vault capture the raw data from the source systems and the results of the business logic required by the business users. One could argue that the job is done by then. In reality, however, end users typically don't want to work with Raw Data Vault or Business Vault entities directly. Valid reasons often include a lack of knowledge of Data Vault modeling and, hence, a lack of clarity about how to query a Data Vault implementation. Additionally, most end users are already familiar with other data consumption methods, typically dimensional models such as the star or snowflake schema, or fully denormalized flat-and-wide tables. In this article, we discuss fact entities and slowly changing dimensions (SCD) Types 1 and 2. Most information delivery tools, such as dashboarding tools like Microsoft Power BI or SQL Server Analysis Services for producing OLAP cubes, are also easy to use with such models.

How to Deliver Information with Data Vault 2.0

Regardless of the desired information delivery format, it can be queried directly from the Raw Data Vault entities. The Data Vault model follows an optimized schema-on-read design: the raw data is stored as-is, and transformations, such as business logic and structural changes, are applied at query time. This works because the incoming source data is broken down into its fundamental components: business keys, relationships, and descriptive data. This optimized storage makes it much easier to apply business rules and to transform the data into any desired target information schema.

Business Vault entities are used during information delivery to apply business rules. In most cases, the raw data is insufficient for reporting: it contains erroneous data, some data is missing, or values need to be converted from one currency to another. However, some of the raw data is good enough for reporting. Therefore, in many cases, information models, such as a dimensional model, are derived from both the Raw Data Vault and the Business Vault by joining the required entities.

Information delivery requirements typically include a historization requirement. A Slowly Changing Dimension (SCD) Type 1 only includes the current state of descriptive attributes, whereas SCD Type 2 contains the full history of descriptive attributes. Data Vault follows a multi-temporal approach and leverages multiple timelines to implement such solutions: The load date timestamp is the technical timeline that indicates when data arrived at the data platform. This timeline must be defined (and controlled) by the data platform team. The snapshot timestamp indicates when information should be delivered to the end user.
This timeline is regular (e.g., every morning at 8 a.m.) and is defined by the business user. Business timelines live inside the source data and indicate when something happened; examples include birth dates, valid-from and valid-to dates, change dates, and deletion dates. By separating these timelines, creating multi-temporal solutions, where some data is back-dated or post-dated, becomes much more straightforward. However, this is beyond the scope of this article.

Implementation Walk-Through

To fulfill the business requirements, let's start as simply as possible. For various reasons, it is highly recommended to implement information marts as SQL views initially and to use physical tables only if performance or processing times/costs require it. Other options, like PIT and bridge tables, typically provide a sufficient (virtualized) solution. We follow this recommendation in this article and start with a dimension view and a fact view.

Store Dimension

Many dimension entities are derived from a hub and its satellite. If no business rules are implemented, the dimension can be derived directly from the Raw Data Vault entities. For example, the following CREATE VIEW statement implements an SCD Type 1 store dimension:

CREATE VIEW InformationMarts.DIM_STORE_SCD1 AS
SELECT
    hub.store_hashkey as StoreKey,
    hub.store_id as StoreID,
    sat.address_street as AddressStreet,
    sat.postal_code as PostalCode,
    sat.country as Country
FROM DV.store_hub hub
LEFT JOIN DV.store_address_crm_lroc_sat sat
    ON hub.hk_store_hub = sat.hk_store_hub
WHERE sat.is_current = 1

This simple query accesses the store_hub and joins it to the store_address satellite. It selects the business key from the hub because typical business users want to include it in the dimension, and it renames all descriptive attributes from the satellite to make them more readable. The hash key is added for efficient joins from fact entities. Finally, a WHERE clause leverages the is_current flag in the satellite to include only the latest descriptive data. This flag is calculated in a view on top of the actual satellite table; thus, the view is joined, not the table. Only this specific WHERE clause makes the dimension an SCD Type 1; leaving it out would automatically lead to an SCD Type 2. In that case, it would also make sense to include the load_date and load_end_date of the satellite view.

Transaction Fact

The following CREATE VIEW statement implements a fact entity. In this simple example, no aggregations are defined. The granularity of the derived fact entity matches the underlying data from the non-historized link. Therefore, the fact view can be derived directly from the non-historized link without requiring a grain shift, e.g., a GROUP BY clause:

CREATE VIEW InformationMarts.FACT_STORE_TRANSACTIONS AS
SELECT
    nl.transaction_id as TransactionID,
    s_hub.store_hashkey as StoreKey,
    c_hub.customer_hashkey as CustomerKey,
    nl.transaction_date as TransactionDate,
    nl.amount as Amount
FROM DV.store_transaction_nlnk nl
LEFT JOIN DV.store_hub s_hub
    ON nl.hk_store_hub = s_hub.hk_store_hub
LEFT JOIN DV.customer_hub c_hub
    ON nl.hk_customer_hub = c_hub.hk_customer_hub

This query selects from the non-historized link and joins both hubs via their hash keys. The hash keys are taken from the hubs, and the relevant transaction details are taken from the non-historized link. A filter for historization is not required because both hubs and non-historized links only capture non-changing data.
Capturing changing facts, which in theory should never happen but might happen in reality, is also possible using non-historized links, but that is beyond the scope of this article.

Pre-Calculated Aggregations

In most business environments, BI developers would now connect their reporting tool of choice to the provided dimensional model to create custom reports. It is common to aggregate data to calculate sums, counts, averages, or other aggregated values, especially for fact data. Depending on the data volume, the reporting tool, and the aggregation complexity, this can be a challenge for business users. To simplify usage and optimize query performance, a pre-aggregation in the dimensional layer might be the best choice in some cases. For example, the following CREATE VIEW statements implement store transaction fact views that already include the requested aggregations. Since aggregations are always based on a GROUP BY clause, the views implement both grain shifts to calculate the number and amount of transactions on the store and customer dimensions:

CREATE VIEW InformationMarts.FACT_AGG_STORE_TRANSACTIONS AS
SELECT
    s_hub.store_hashkey as StoreKey,
    COUNT(nl.transaction_id) as TransactionCount,
    SUM(nl.amount) as TotalAmount,
    AVG(nl.amount) as AverageAmount
FROM DV.store_transaction_nlnk nl
LEFT JOIN DV.store_hub s_hub
    ON nl.hk_store_hub = s_hub.hk_store_hub
GROUP BY s_hub.store_hashkey

CREATE VIEW InformationMarts.FACT_AGG_CUSTOMER_TRANSACTIONS AS
SELECT
    c_hub.customer_hashkey as CustomerKey,
    COUNT(nl.transaction_id) as TransactionCount,
    SUM(nl.amount) as TotalAmount,
    AVG(nl.amount) as AverageAmount
FROM DV.store_transaction_nlnk nl
LEFT JOIN DV.customer_hub c_hub
    ON nl.hk_customer_hub = c_hub.hk_customer_hub
GROUP BY c_hub.customer_hashkey

In both queries, only one hub is required. The hash key of that hub is used for the GROUP BY clause, and three basic aggregations are applied to determine the count of transactions and to calculate the sum and average transaction amount. While this reduces the workload on the business-user side, the implementation might still be slow or produce high processing costs. In that case, it would make sense to materialize this aggregated fact entity or to introduce a bridge table.

A bridge table is similar to a pre-aggregated fact table in dimensional models. However, it is much more customizable, as it only implements the grain shift operation (in this case, the GROUP BY clause), measure calculations, and timelines. It also contains the hub references, which will be turned into dimension references, as seen in the previous examples. The definition of the bridge table is provided in the following statement:

CREATE TABLE [DV].[CUSTOMER_TRANSACTIONS_BB] (
    SnapshotDate DATETIME2(7) NOT NULL,
    CustomerKey CHAR(32) NOT NULL,
    TransactionCount BIGINT NOT NULL,
    TotalAmount MONEY NOT NULL,
    AverageAmount MONEY NOT NULL
);

The code to load the bridge table is similar to the fact view:

INSERT INTO [DV].[CUSTOMER_TRANSACTIONS_BB]
SELECT
    SYSDATETIME() as SnapshotDate,
    nl.hk_customer_hub as CustomerKey,
    COUNT(nl.transaction_id) as TransactionCount,
    SUM(nl.amount) as TotalAmount,
    AVG(nl.amount) as AverageAmount
FROM DV.store_transaction_nlnk nl
GROUP BY nl.hk_customer_hub;

In many other cases, the bridge table might also contain complex business calculations. Still, the focus is on the grain shift operation, which takes a considerable amount of time on many traditional database systems due to their row-based storage.
However, Microsoft Fabric uses a different storage format that is optimized for aggregations, typically at the price of joins. The bridge table aims to improve the query performance of fact entities. In turn, that means it is fine to pre-join other data into the bridge table if the join performance is insufficient. A common requirement is the addition of a time dimension.

Snapshot-Based Information Delivery

So far, the store dimension presented in this article has been an SCD Type 1 dimension - a dimension without history. However, in many cases, businesses want to relate facts to the version of the dimension member at the time the fact occurred. For example, an order was issued before the customer relocated to another state. In a Type 1 scenario, the order's revenue would be associated with the customer's current state. Depending on the information requirements, this might not be correct; in such cases, the revenue should be associated with the customer's state at the time of the transaction. This information requirement demands an SCD Type 2 dimension with history. Point-in-time (PIT) tables are recommended to produce such dimensions efficiently. This section discusses the necessary steps to create such a table.

A good starting point is a date table: a reference table for dates that can be used to produce a date dimension and to populate the PIT table. The following statement creates the table and initializes it with a range of daily dates (here, 10,000 days starting on 2020-01-01):

CREATE SCHEMA CONTROL;

CREATE TABLE CONTROL.Ref_Date_v0 (
    snapshot_datetime datetime2(6),
    snapshot_date date,
    year int,
    month int,
    quarter int,
    week int,
    day int,
    day_of_year int,
    week_day int,
    beginning_of_year bit,
    beginning_of_quarter bit,
    beginning_of_month bit,
    beginning_of_week bit,
    end_of_year bit,
    end_of_quarter bit,
    end_of_month bit,
    end_of_week bit
);

WITH date_base AS (
    SELECT n FROM (VALUES (0),(1),(2),(3),(4),(5),(6),(7),(8),(9)) v(n)
),
date_basic as (
    SELECT TOP (DATEDIFF(DAY, '1970-01-01', '2099-12-31') + 1)
        ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS rn
    FROM date_base ones, date_base tens, date_base hundreds, date_base thousands
    ORDER BY 1
),
snapshot_base AS (
    select
        cast(dateadd(day, rn - 1, '2020-01-01 07:00:00') as datetime2) as snapshot_datetime,
        cast(dateadd(day, rn - 1, '2020-01-01') as date) as snapshot_date
    from date_basic
),
snapshot_extended AS (
    SELECT
        snapshot_datetime,
        snapshot_date,
        DATEPART(YEAR, snapshot_date) as year,
        DATEPART(MONTH, snapshot_date) as month,
        DATEPART(QUARTER, snapshot_date) as quarter,
        DATEPART(WEEK, snapshot_date) as week,
        DATEPART(DAY, snapshot_date) as day,
        DATEPART(DAYOFYEAR, snapshot_date) as day_of_year,
        DATEPART(WEEKDAY, snapshot_date) as week_day
    FROM snapshot_base
)
INSERT INTO CONTROL.Ref_Date_v0
SELECT *,
    CASE WHEN day_of_year = 1 THEN 1 ELSE 0 END as beginning_of_year,
    CASE WHEN day = 1 AND month in (1, 4, 7, 10) THEN 1 ELSE 0 END as beginning_of_quarter,
    CASE WHEN day = 1 THEN 1 ELSE 0 END as beginning_of_month,
    CASE WHEN week_day = 2 THEN 1 ELSE 0 END as beginning_of_week,
    CASE WHEN snapshot_date = EOMONTH(snapshot_date) AND month = 12 THEN 1 ELSE 0 END as end_of_year,
    CASE WHEN snapshot_date = EOMONTH(snapshot_date) AND month in (3, 6, 9, 12) THEN 1 ELSE 0 END as end_of_quarter,
    CASE WHEN snapshot_date = EOMONTH(snapshot_date) THEN 1 ELSE 0 END as end_of_month,
    CASE WHEN week_day = 1 THEN 1 ELSE 0 END as end_of_week
FROM snapshot_extended

The first part is a simple DDL statement that creates the reference date table.
This is followed by an INSERT statement that leverages multiple Common Table Expressions (CTEs) to simplify the logic. The first CTE, date_base, simply generates the ten numbers 0 to 9. The next CTE, date_basic, CROSS JOINs the previous CTE four times, creating 10 * 10 * 10 * 10 = 10,000 rows; a ROW_NUMBER() turns these rows into an ascending number ranging from 1 to 10,000. The next CTE, snapshot_base, uses this ascending number in a DATEADD() on top of a specified start date, '2020-01-01 07:00:00', to generate a list of daily dates, once as datatype datetime2 and once as datatype date. The last CTE, snapshot_extended, adds metadata like MONTH, YEAR, etc. Lastly, boolean columns that mark the beginning and end of weeks, months, quarters, and years are added, and everything is inserted into the reference date table.

This reference date table can now be used to create and load a Point-In-Time (PIT) table. The PIT table precalculates, for each snapshot date timestamp (SDTS), which satellite entry is valid for each business key. The granularity of a PIT is (number of snapshots) * (number of business keys) = row count in the PIT. The following code creates and populates a simple PIT example for stores:

CREATE TABLE DV.STORE_BP (
    hk_d_store CHAR(32) NOT NULL,
    hk_store_hub CHAR(32) NOT NULL,
    snapshot_datetime datetime2(6) NOT NULL,
    hk_store_address_crm_lroc_sat CHAR(32) NULL,
    load_datetime_store_address_crm_lroc_sat datetime2(6) NULL
);

WITH pit_entries AS (
    SELECT
        CONVERT(CHAR(32), HASHBYTES('MD5', CONCAT(hub.hk_store_hub, '||', date.snapshot_datetime)), 2) as hk_d_store,
        hub.hk_store_hub,
        date.snapshot_datetime,
        COALESCE(sat1.hk_store_hub, '00000000000000000000000000000000') as hk_store_address_crm_lroc_sat,
        COALESCE(sat1.load_datetime, CONVERT(DATETIME, '1900-01-01T00:00:00', 126)) as load_datetime_store_address_crm_lroc_sat
    FROM DV.store_hub hub
    INNER JOIN CONTROL.Ref_Date_v0 date
        ON hub.load_datetime <= date.snapshot_datetime
    LEFT JOIN DV.store_address_crm_lroc_sat sat1
        ON hub.hk_store_hub = sat1.hk_store_hub
        AND date.snapshot_datetime BETWEEN sat1.load_datetime and sat1.load_end_datetime
)
INSERT INTO DV.STORE_BP
SELECT *
FROM pit_entries new
WHERE NOT EXISTS (SELECT 1 FROM DV.STORE_BP pit WHERE pit.hk_d_store = new.hk_d_store)

The single CTE, pit_entries, defines the whole set of PIT entries. The store hub is joined against the snapshot table only where the hub record appears before the SDTS, to reduce the number of rows. But since there is no more specific join condition, the number of rows after this join is already a multiple of the number of rows in the hub. Next, the only satellite attached to the store hub, store_address_crm_lroc_sat, is joined. It is joined on the hash key, and additionally the load_datetime and load_end_datetime are leveraged with a BETWEEN condition to determine the valid record for a specific SDTS. The SELECT list of this CTE introduces a new concept, a dimensional key, hk_d_store, generated by hashing the store hub hash key and the SDTS. This creates a new unique column that can be used for the primary key constraint and for incremental loads. Additionally, both components of this dimensional key, hk_store_hub and snapshot_datetime, are selected. The hash key and load datetime of the satellite are also selected to uniquely identify one row of the satellite; they are renamed to include the satellite's name, which helps when joining multiple satellites instead of just one. A typical PIT always brings together all satellites connected to a specific hub.
Therefore, a typical PIT has various combinations of hash key and load_datetime columns. Ultimately, we insert only rows whose new dimensional key does not already exist in the target PIT; this additional clause enables incremental loading.

This PIT can now be used as a starting point for a snapshot-based store dimension. To produce a historized (SCD Type 2) store dimension, the PIT is joined with the hub and the satellite:

CREATE VIEW InformationMarts.DIM_STORE_SB AS
SELECT
    pit.snapshot_datetime as SnapshotDatetime,
    hub.store_id as StoreID,
    sat.address_street as AddressStreet,
    sat.postal_code as PostalCode,
    sat.country as Country
FROM DV.STORE_BP pit
INNER JOIN DV.store_hub hub
    ON hub.hk_store_hub = pit.hk_store_hub
INNER JOIN DV.store_address_crm_lroc_sat sat
    ON pit.hk_store_address_crm_lroc_sat = sat.hk_store_hub
    AND pit.load_datetime_store_address_crm_lroc_sat = sat.load_datetime

With all history precalculated in the PIT, the actual dimension can be virtual again, because the only operation required is an INNER JOIN. Additional information and patterns about PIT and bridge tables can be found on the Scalefree Blog.

Conclusion

Data Vault has been designed to integrate data from multiple data sources, deconstruct the data into its fundamental components, and store and organize it so that any target structure can be derived quickly. This article focused on generating information models, often dimensional models, using virtual entities. They are used in the data architecture to deliver information. After all, dimensional models are easier to consume by dashboarding solutions, and business users know how to use dimensions and facts to aggregate their measures. However, PIT and bridge tables are usually needed to maintain the desired performance level. They also simplify the implementation of dimension and fact entities and, for those reasons, are frequently found in Data Vault-based data platforms. This article completes the information delivery part of the series; the following articles will focus on the automation aspects of Data Vault modeling and implementation. <<< Back to Blog Series Title Page View the full article
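As an aside on the article above: the PIT load computes the dimensional key hk_d_store as an MD5 hash over the hub hash key and the snapshot timestamp. A minimal Python sketch of the same idea is below, purely illustrative; the exact string representation of the timestamp must match whatever CONCAT produces on the SQL Server side, so the format used here is an assumption.

import hashlib

def dimensional_key(hk_store_hub: str, snapshot_datetime: str) -> str:
    """Mirror of CONVERT(CHAR(32), HASHBYTES('MD5', CONCAT(hk, '||', ts)), 2).

    snapshot_datetime must be the same string SQL Server produces when the
    datetime2 value is implicitly cast inside CONCAT (an assumption here).
    """
    raw = f"{hk_store_hub}||{snapshot_datetime}".encode("utf-8")
    # HASHBYTES returns binary; CONVERT(..., 2) renders it as upper-case hex.
    return hashlib.md5(raw).hexdigest().upper()

# Example: a hub hash key combined with one snapshot timestamp
print(dimensional_key("A" * 32, "2020-01-01 07:00:00.000000"))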
  7. I operate a business that mostly depends on design by simulation, relying on constant operation at very high CPU utilization of big multi-core PCs. And because of the high utilization, I seem to kill them on a fairly routine basis: three Dell 7820 Xeon Golds in the last two years, which is worse than usual; on average I probably kill one every second year. We can talk about what's dying separately, it doesn't matter here; the issue at hand is DOWN TIME. View the full article
  8. Hi all, I recently bought a new SSD (Samsung 990 Pro 2TB) and I want to install my system on it. I have already used my machine for a while on another SSD (Kingston KC2000 1TB), and I don’t want to do a complete install of all the software, games, files, drivers and so on; that’s why I think cloning would be better, but I’ve never done that before. Is it a good idea to clone my whole Windows SSD to the new SSD? What problems might occur, or will it be flawless? Just asking because I don’t want to waste my time compared to a clean install, and I’ll be glad to hear your experience with it. View the full article
  9. I have an issue within IT where I frequently use the Start menu and type "Check for Updates"... it's sort of the first thing I do when I have clients with Windows issues... but more often than not "Check for Updates" gives me Java Update as the first result, and always being rushed, I click on it far too often. Is there any way to make sure the "Check for Updates" Java software NEVER shows up in the search results, or at the very least falls behind the Windows Check for updates? View the full article
  10. Hi: I'm trying to install Windows 11 on my HP laptop from a USB recovery drive. The process stops at the network connection step, not finding drivers for the wireless adapter. I found a few solutions online, like "Shift + F10" to go to a command prompt, or activating a virtual machine to do that. None worked in my case, and the little Accessibility icon in the right corner is disabled as well. In my last try I got a message to install the drivers for the wireless adapter. I downloaded two possible drivers from the HP web support site and extracted them on my desktop. The problem is the laptop can't see any files on the USB I'm using to copy the drivers. I'd really appreciate any help on this matter, because I've already spent a week on this issue and I really need the laptop for work. Best, Nestor View the full article
  11. Hi folks, I'm using Windows Server 2025 on a bog-standard (decent) laptop as a workstation. I removed all the "server"-specific stuff such as Ctrl+Alt+Delete to log on, nag screens asking for a reason when the system is shut down / rebooted, password restrictions, RDP restrictions, etc. You can get a 180-day free trial (extendable 6 times) if you want to try this too. It's far better than standard Windows 11, and there's no bloat. Only Macrium needs a "server" version of its software -- otherwise everything I need, including Office 2021, runs perfectly. Also Hyper-V (good though it is on W11 Pro) seems even better on the server -- note though there's no "Quick Create" VM wizard, but IMHO if you can turn the server into a desktop, creating a Hyper-V VM should be child's play. I'll set up a couple of Linux VMs on this just to test -- Windows VMs are a doddle. Screenshot running on laptop View the full article
  12. I can't log in to Office or Teams on my personal PC; the app just gives me error code 2603 or a message saying "We can't connect you." Same when trying to access login.microsoft through Chrome: I get ERR_NETWORK_ACCESS_DENIED and a message saying that I have no internet access. Other than that, every other website or program works with no issue. Would really appreciate the help. View the full article
  13. Hey, during my setupcomplete.cmd I am performing some Windows updates etc. (stuff which requires a restart). Microsoft clearly states not to include a reboot command within setupcomplete.cmd, as the Windows install process might be interrupted. So what are my options to automatically trigger a restart as soon as the Windows install process is complete? My current idea would be, within setupcomplete.cmd, to start a separate, non-waiting PowerShell process which checks whether the windeploy process is still running and, if not, fires shutdown /r /t 60. Any different ideas with benefits? View the full article
  14. Hello, I would like your feedback on moving a folder in Documents from one SharePoint site to another site on the same Microsoft tenant. I've used several PnP script solutions to download all the content and then send it back via Migration Manager, but it takes a long time. I've tried the Move function, but it's unreliable: loading runs for hours, and reloading the page crashes the move. I used these two solutions to move 1 TB and it was laborious. What's the best practice for this scenario? Translated with DeepL.com (free version) View the full article
  15. I have completed the Partner Organization Onboarding process, was able to get the achievement code, and also have MTM access. However, I cannot see my company show up on https://appsource.microsoft.com/ in the training services partner section. Can anyone advise? View the full article
  16. How can I get rid of this FPS counter? I’m not sure how I ended up with it, but I cannot for the life of me find out how to turn it off. View the full article
  17. I recently transferred a bunch of photos from my iPhone to my Mac and found that they were all in HEIC format, a total of more than 2,000 photos! 😭 Now I want to convert HEIC to JPG on the Mac, but here comes the problem: converting them one by one manually is completely unrealistic and too painful... I tried using Preview to batch convert, but it was very slow, the Mac fan was spinning wildly, and it occasionally crashed. Then I tried several online conversion tools, but they either had file-number limits or were too troublesome to upload to and download from. Is there any efficient and hassle-free way to quickly convert so many HEIC files to JPG on a Mac at once? It would be best if the original image quality could be retained; I don’t want them compressed and blurred. Could anyone share their practical experience? 🙏 View the full article
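One possible approach to the HEIC question above, sketched in Python: the pillow-heif package registers an HEIC opener for Pillow, after which a short loop can batch-convert a folder. The folder path and quality value are placeholders, so treat this as a sketch rather than a finished tool.

# Batch-convert HEIC photos to JPG with Pillow + pillow-heif
# (pip install pillow pillow-heif). Paths below are placeholders.
from pathlib import Path

from PIL import Image
from pillow_heif import register_heif_opener

register_heif_opener()  # lets Pillow open .heic files

source = Path("~/Pictures/iphone-import").expanduser()
for heic_file in source.glob("*.heic"):
    jpg_file = heic_file.with_suffix(".jpg")
    with Image.open(heic_file) as img:
        # convert("RGB") drops the alpha channel that JPEG cannot store
        img.convert("RGB").save(jpg_file, "JPEG", quality=92)
    print(f"Converted {heic_file.name} -> {jpg_file.name}")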
  18. Hi all, newbie here (have been sitting on the sidelines watching and learning for some time though!) - please excuse my error if this appears in the wrong thread. I have been attempting to install version 22631.2506 via ISO to my notebook. Previous attempts to bypass the MS Account requirement have not been an issue. With version 23H2 it appears that MS has clamped down and blocked all the usual options, including the Rufus & NTLite approaches. I am hoping to be proven wrong, although my upwards of 50 attempts to bypass the MS Account requirement have been unsuccessful. When attempting to bypass connecting to the net, or to bypass NRO via CMD, the system returns you to the start of the onboarding process in a continuous loop! View the full article
  19. Is there a way to stop the Chrome browser from opening a new tab for each link that I click on? View the full article
  20. I've been trying to do an in-place upgrade of my Windows 11 system to see if it will fix some issues I am having. I have tried the 'old' in-place upgrade using a 24H2 ISO and also tried the newer Settings > System > Recovery > "Fix problems using Windows Update" reinstall feature. The upgrade goes fine in both cases, but when the system reboots into Windows it gets a BSOD: INACCESSIBLE_BOOT_DEVICE. It appears the upgrade has mucked around with my boot device so that it can't be found. My boot device is set up properly in my BIOS, and I've had no problems of this sort other than when doing this repair install. When I reboot again, it reverts back to the previous version I had before. So this isn't your run-of-the-mill boot device problem. As I said, the install must have made my boot drive inaccessible in some way. After I'm back on my previous version it boots just fine. Also, restoring a disk image I took prior to the repair install boots fine. So it must be the install playing games on me. Maybe it's downloading some incorrect disk controller drivers. I boot from an NVMe M.2 PCIe Gen4 SSD. Any thoughts on this? BTW... I did a clean install with no problems. I didn't stay with that as I have way too many things to reinstall and don't have the time or wherewithal to do it. View the full article
  21. In the second session of the GitHub Copilot Bootcamp LATAM, organized by Microsoft Reactor, engineer Manuel Ortiz, Microsoft Learn Ambassador and community leader at GitHub, guided developers in building a web application with artificial intelligence capabilities. This hands-on workshop combined backend development fundamentals in Python with advanced techniques for integrating Azure OpenAI language models.

Introduction to Azure OpenAI

Azure OpenAI is a collaboration between Microsoft and OpenAI that lets developers integrate advanced artificial intelligence models into their applications using Azure infrastructure. It offers access to powerful models such as GPT-4, which can be used for a variety of tasks, from natural language processing to text generation.

Setting up Azure OpenAI

To start using Azure OpenAI, you need to follow a few basic steps:

Create an Azure account: If you don't have one yet, you can create one in the Azure portal. Students can request free credits to use Azure services.

Create an Azure OpenAI service: Go to the Azure portal and search for "Azure OpenAI". Click "Create" and select your subscription and resource group. Choose the region and configure the service name, which must be alphanumeric with no special characters. Select the appropriate pricing tier and finish creating the service.

Get the credentials: After creating the service, you will need the credentials (API key and endpoint) to authenticate your requests. These credentials can be found in the "Keys and Endpoints" section of the created service.

Integration with Python and Flask

Python is one of the most popular programming languages for developing artificial intelligence applications thanks to its simplicity and vast ecosystem of tools. During setup, you can use several libraries and tools that make AI development with Python easier, including: TensorFlow, an open-source library for machine learning; Keras, a high-level API for neural networks that runs on top of TensorFlow; Scikit-learn, a library for machine learning in Python; and Flask, a microframework for developing web applications.

Once the Azure OpenAI service is configured, you can integrate it into your Python applications using Flask:

Install the required libraries: Create a virtual environment and install the necessary libraries, such as flask and openai.

Configure the project: Create a .env file to store your credentials securely. Configure your Flask application to load these credentials and connect to the Azure OpenAI service.

Create the AI model: Use the openai library to send prompts to the model and receive responses. Integrate these responses into your web application to provide AI functionality to users.
Code Example

Here is a simplified example of how to configure and use Azure OpenAI in a Flask application:

from flask import Flask, request, render_template
from dotenv import load_dotenv
import openai
import os

app = Flask(__name__)

# Load the credentials from the .env file
load_dotenv()
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")

@app.route("/", methods=["GET", "POST"])
def index():
    response_text = ""
    if request.method == "POST":
        prompt = request.form["prompt"]
        response = openai.Completion.create(
            engine="text-davinci-003",
            prompt=prompt,
            max_tokens=100
        )
        response_text = response.choices[0].text.strip()
    return render_template("index.html", response_text=response_text)

if __name__ == "__main__":
    app.run(debug=True)

Benefits of Azure OpenAI

Access to advanced models: Use the latest and most powerful OpenAI models. Scalability: Azure's infrastructure lets you scale your applications as needed. Security and compliance: Benefit from Azure's robust security and compliance measures.

Keep learning

If you want to learn more about these techniques, watch the recordings of the GitHub Copilot Bootcamp, start using the free GitHub Copilot, and discover how to transform the way you code using artificial intelligence. View the full article
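A note on the snippet above: it uses the pre-1.0 openai Python package and a completions-style model. If you are on the current openai SDK (1.x), the Azure-specific client looks roughly like the sketch below; the deployment name, API version, and environment variable names are assumptions, so adjust them to your own Azure OpenAI resource.

# Rough equivalent using the openai>=1.0 SDK's AzureOpenAI client.
# Deployment name, API version and env var names are assumptions.
import os

from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_version="2024-02-01",  # pick the version enabled on your resource
)

response = client.chat.completions.create(
    model="my-gpt4-deployment",  # the *deployment* name, not the model family
    messages=[{"role": "user", "content": "Hola, ¿qué es Azure OpenAI?"}],
    max_tokens=100,
)
print(response.choices[0].message.content)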
  22. This article is part of a series of articles on API Management and Generative AI. We believe that adding Azure API Management to your AI projects can help you scale your AI models and make them more secure and easier to manage. In this article, we will shed some light on capabilities in API Management that are designed to help you govern and manage Generative AI APIs, ensuring that you are building resilient and secure intelligent applications.

But why exactly do I need API Management for my AI APIs?

Common challenges when implementing Gen AI-powered solutions include:

- Quota (calculated in tokens per minute, TPM) allocation across multiple client apps
- How to control and track token consumption for all users
- Mechanisms to attribute costs to specific client apps, activities, or users
- Your system's resiliency to backend failures when hitting one or more limits

And the list goes on with more challenges and questions. Well, let's find some answers, shall we?

Quota allocation

Take a scenario where you have more than one client application, and they are talking to one or more models from Azure OpenAI Service or Azure AI Foundry. With this complexity, you want to have control over the quota distribution for each of the applications.

Tracking token usage & security

I bet you agree with me that it would be unfortunate if one of your applications (most likely the one that gets the highest traffic) hogged all the TPM quota, leaving zero tokens for your other applications, right? If this occurs, there is also a chance that it is a DDoS attack, with bad actors trying to bombard your system with purposeless traffic and causing service downtime. Yet another reason why you will need more control and tracking mechanisms to ensure this doesn't happen.

Token metrics

As a data-driven company, having additional insights with the flexibility to dissect and examine usage data down to dimensions like subscription ID or API ID is extremely valuable. These metrics go a long way in informing capacity and budget planning decisions.

Automatic failovers

This is a common one. You want to ensure that your users experience zero service downtime, so if one of your backends is down, does your system architecture allow automatic rerouting and forwarding to healthy services?

So, how will API Management help address these challenges? API Management has a set of policies and metrics called Generative AI (Gen AI) gateway capabilities, which empower you to manage and have full control of all these moving pieces and components of your intelligent systems.

Minimize cost with token-based limits and semantic caching

How can you minimize operational costs for AI applications as much as possible? By leveraging the `llm-token-limit` policy in Azure API Management, you can enforce token-based limits per user on identifiers such as subscription keys and requesting IP addresses. When a caller surpasses their allocated tokens-per-minute quota, they receive an HTTP "Too Many Requests" error along with retry-after instructions. This mechanism ensures fair usage and prevents any single user from monopolizing resources. To optimize cost consumption for Large Language Models (LLMs), it is crucial to minimize the number of API calls made to the model. Implementing the `llm-semantic-cache-store` and `llm-semantic-cache-lookup` policies allows you to store and retrieve similar completions.
This method involves performing a cache lookup for reused completions, thereby reducing the number of calls sent to the LLM backend. Consequently, this strategy helps in significantly lowering operational costs.

Ensure reliability with load balancing and circuit breakers

Azure API Management allows you to leverage load balancers to distribute the workload across various prioritized LLM backends effectively. Additionally, you can set up circuit breaker rules that redirect requests to a responsive backend if the prioritized one fails, thereby minimizing recovery time and enhancing system reliability. Implementing the semantic-caching policy not only saves costs but also reduces system latency by minimizing the number of calls processed by the backend.

Okay. What next?

This article mentions these capabilities at a high level, but in the coming weeks, we will publish articles that go deeper into each of these generative AI capabilities in API Management, with examples of how to set up each policy. Stay tuned!

Do you have any resources I can look at in the meantime to learn more? Absolutely! Check out:

- Manage your Azure OpenAI APIs with Azure API Management http://aka.ms/apimlove

View the full article
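As a small illustration of the token-limit behavior described above: when a token-limit policy rejects a call, the gateway returns HTTP 429 ("Too Many Requests") with a Retry-After hint, and a client can back off accordingly. The sketch below assumes a hypothetical API Management gateway URL and uses the standard Ocp-Apim-Subscription-Key header; it is not an official sample.

# Minimal client-side back-off for an APIM-fronted LLM API that enforces
# a tokens-per-minute limit. Gateway URL and payload are placeholders.
import os
import time

import requests

GATEWAY_URL = "https://contoso-apim.azure-api.net/openai/chat"  # placeholder
HEADERS = {"Ocp-Apim-Subscription-Key": os.getenv("APIM_SUBSCRIPTION_KEY", "")}

def call_llm(payload: dict, max_attempts: int = 5) -> dict:
    for attempt in range(max_attempts):
        resp = requests.post(GATEWAY_URL, json=payload, headers=HEADERS, timeout=60)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Token limit exceeded: honor the Retry-After hint before retrying.
        wait = int(resp.headers.get("Retry-After", 2 ** attempt))
        print(f"429 received, retrying in {wait}s (attempt {attempt + 1})")
        time.sleep(wait)
    raise RuntimeError("Gave up after repeated 429 responses")

# Example call:
# result = call_llm({"messages": [{"role": "user", "content": "Hello"}]})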
  23. I am a member of the Microsoft 365 Developer Program, and my account’s OneDrive feature has been blocked due to a violation of the Acceptable Use Policy. I have contacted the support team via the M365 Admin Center, but they have not provided a specific reason for the block. As a developer, OneDrive is critical to my work, and this restriction is significantly impacting my productivity. I urgently need this issue resolved. I am reaching out to this community for assistance, hoping someone can help me understand the reason for the block and guide me on how to restore OneDrive functionality. Furthermore, even if the block cannot be lifted immediately, I sincerely request at least being allowed to retrieve my data from OneDrive. This data is vital to my projects, and losing access to it would cause significant issues. If anyone has encountered a similar issue or knows how to address this, please share your insights. Thank you for your time and consideration. View the full article
  24. Hi everyone, I'm an indie author who is just getting started. I recently finished an ebook and would like to add my personal logo as a watermark or cover element in my own PDF version. However, I'm not too familiar with PDF editing tools, and the tutorials I found online either involve complicated steps or require paid software; I'd prefer to add the logo watermark to the PDF in a free or low-cost way. I tried pasting the logo into Word and then converting to PDF, but that resulted in typographical errors, and the file size came out too large when I tried online PDF watermark tools. I know this may be a basic question, but I really do not have a clue. Please share how you would do this. View the full article
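One way to approach the watermarking question above, sketched in Python with the pypdf library: export the logo once as a single-page PDF (for example from Word or any drawing tool), then stamp that page onto every page of the ebook. The file names are placeholders, and this is a sketch rather than a full tool.

# Stamp a one-page logo PDF onto every page of an ebook PDF using pypdf
# (pip install pypdf). File names below are placeholders.
from pypdf import PdfReader, PdfWriter

content = PdfReader("ebook.pdf")
stamp_page = PdfReader("logo.pdf").pages[0]  # the logo exported as a 1-page PDF

writer = PdfWriter()
for page in content.pages:
    page.merge_page(stamp_page)  # draws the logo on top of the page content
    writer.add_page(page)

with open("ebook_watermarked.pdf", "wb") as out_file:
    writer.write(out_file)
print("Wrote ebook_watermarked.pdf")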