Tuesday 9:00 AM - 5:00 PM · Room 344
Design a Well-Architected Fabric Solution: A Medallion-First Approach
Transform your data solutions with a streamlined Medallion Architecture using Microsoft Fabric. This session is tailored for professionals familiar with Power BI and basic dataflows, providing a step-by-step guide to implementing Bronze, Silver, and Gold layers for a scalable and maintainable pipeline. Learn how to evolve an unstructured dataflow and semantic model into a comprehensive architecture. Familiarity with Python or SQL is a bonus but not required.
Modules:
• Overview of Medallion Architecture: Gain a clear understanding of Warehouses and Lakehouses, their role within Microsoft Fabric, and how they enable Medallion Architecture.
• Understanding OneLake: Dive into OneLake and explore its foundational storage structure, including Delta tables and Parquet. Understand key features like columnar storage, Delta optimizations, and performance enhancements through simple, no-code explanations.
• Feature Showdown: Compare and contrast key tools in Microsoft Fabric, such as SQL vs. Spark, Notebooks vs. Dataflows vs. Pipelines, and Warehouses vs. Lakehouses, to determine the best fit for your scenarios.
• Well-Designed Architecture & Best Practices: Learn about Git integration, monitoring techniques, and actionable best practices for designing scalable and maintainable data architectures.
Labs:
• Build a Lakehouse and create a Medallion Architecture pipeline.
• Extract raw data using Pipelines (Bronze).
• Clean and transform data with Spark Notebooks (Silver); a minimal sketch of this step follows below.
• Add business logic with T-SQL (Gold).
• Create orchestration and monitor pipelines for optimal performance.
• Bonus Lab: Create and manage a semantic model and integrate it with Git for version control.
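To give a flavor of the Silver lab, here is a minimal, hedged sketch of the kind of Spark Notebook cell involved: read the raw Bronze table loaded by the pipeline, apply light cleaning, and persist a Delta table for the Silver layer. The table and column names are illustrative assumptions, not part of the lab materials.

```python
# Minimal Bronze -> Silver sketch for a Fabric Spark Notebook (illustrative names).
# In Fabric notebooks, `spark` is the predefined Spark session.
from pyspark.sql import functions as F

# Read the raw Bronze table loaded by the pipeline (table name is an assumption).
bronze_df = spark.read.table("bronze_sales")

# Light cleanup: drop exact duplicates, remove rows without a key, standardize types.
silver_df = (
    bronze_df
    .dropDuplicates()
    .filter(F.col("order_id").isNotNull())
    .withColumn("order_date", F.to_date("order_date"))
    .withColumn("amount", F.col("amount").cast("decimal(18,2)"))
)

# Persist as a Delta table in the Lakehouse for the Silver layer.
silver_df.write.format("delta").mode("overwrite").saveAsTable("silver_sales")
```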
Tuesday 1:30 PM - 5:00 PM · Room 347
Query Store and Azure SQL Copilot, who is the fairest in the land?
Query Store + Azure SQL Copilot: which is the baddest query in my instance?
On-premises or Azure, it doesn't matter: Query Store will help you find out how your queries are performing.
Take your performance to the next level by learning how to dig into Query Store data. Understand how it works and how you can use Query Store data to solve performance issues and detect problems before they turn into incidents.
Do you want to know how to mine Query Store for plans and queries? Learn how to get the best out of it in an instance with several databases.
We will also discuss the options available for troubleshooting with Azure SQL Copilot.
Based on Microsoft support experience.
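To give a flavor of the kind of Query Store mining discussed (the connection string, the exact query, and the thresholds below are illustrative assumptions, not the presenter's material), here is a minimal Python sketch that walks the user databases on an instance and lists the slowest queries recorded by Query Store in each one:

```python
# Minimal sketch: mine Query Store for the slowest queries in every user database.
# Works against a SQL Server instance or Managed Instance (USE is not available
# in a single Azure SQL Database). Connection details are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;"
    "Trusted_Connection=yes;TrustServerCertificate=yes"
)
cur = conn.cursor()

databases = [row.name for row in cur.execute(
    "SELECT name FROM sys.databases WHERE database_id > 4 AND state_desc = 'ONLINE'"
)]

query = """
SELECT TOP (5) qt.query_sql_text,
       AVG(rs.avg_duration) AS avg_duration_us,
       SUM(rs.count_executions) AS executions
FROM sys.query_store_query AS q
JOIN sys.query_store_query_text AS qt ON qt.query_text_id = q.query_text_id
JOIN sys.query_store_plan AS p ON p.query_id = q.query_id
JOIN sys.query_store_runtime_stats AS rs ON rs.plan_id = p.plan_id
GROUP BY qt.query_sql_text
ORDER BY avg_duration_us DESC;
"""

for db in databases:
    cur.execute(f"USE [{db}];")
    for text, avg_us, execs in cur.execute(query):
        print(db, round(avg_us / 1000, 1), "ms", execs, text[:80])
```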
Wednesday 2:00 PM - 3:10 PM · Room 344
Power BI, DirectQuery and SQL Server: Is It a Good Choice?
You will learn best practices, tips, and tricks for successfully using SQL databases (on-premises, IaaS, PaaS, SQL Managed Instance) with Power BI in production environments.
How to improve performance using, for example, Read Scale-Out, Hyperscale or Synapse, partitioning, columnstore indexes, indexed views, and more.
How to monitor and diagnose your database and find issues with Query Store. These lessons come from Microsoft CSS support cases and customer field engagements.
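As one small, hedged illustration of the techniques listed (server, database, and credentials below are placeholders, and Power BI itself would set this in its data source connection rather than in Python), routing read-only reporting workloads to a readable secondary is often as simple as adding the application intent to the connection string:

```python
# Minimal sketch: route read-only reporting queries to a readable secondary
# (Read Scale-Out / readable replicas) by setting ApplicationIntent=ReadOnly.
# Server, database, and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=SalesDB;"
    "UID=report_user;PWD=...;Encrypt=yes;"
    "ApplicationIntent=ReadOnly"   # queries land on the read replica, not the primary
)

# Quick check: returns 'READ_ONLY' when the readable secondary served the connection.
row = conn.cursor().execute(
    "SELECT DATABASEPROPERTYEX(DB_NAME(), 'Updateability');"
).fetchone()
print(row[0])
```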
Wednesday 4:00 PM - 5:10 PM · Room 343
Code Changes That Eliminate SQL Server Performance Complaints
Your queries are running so slowly that people are upset. You need some help!
Over the past twenty years, I have focused my career on SQL Server performance and query tuning. I have learned that you can rewrite slow-running queries to get the same results while significantly reducing compute and run time. Most slow-running queries fall into familiar patterns that DBAs like to call anti-patterns. We will cover how to identify several of these patterns, along with solutions you can implement to improve your query performance today!
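To give one concrete, hypothetical flavor of such a rewrite (the table, columns, and connection below are made up and not taken from the session), a classic anti-pattern is wrapping an indexed column in a function, which blocks index seeks; rewriting the predicate as an open-ended range keeps it sargable. The sketch simply times both forms:

```python
# Minimal sketch: compare a non-sargable predicate (function on the column)
# against a sargable rewrite that returns the same rows. Names are illustrative.
import time
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;DATABASE=SalesDB;"
    "Trusted_Connection=yes;TrustServerCertificate=yes"
)
cur = conn.cursor()

queries = {
    # Anti-pattern: YEAR() on the column prevents an index seek on OrderDate.
    "non_sargable": "SELECT COUNT(*) FROM dbo.Orders WHERE YEAR(OrderDate) = 2024;",
    # Rewrite: same result, but the open-ended date range can use the index.
    "sargable": ("SELECT COUNT(*) FROM dbo.Orders "
                 "WHERE OrderDate >= '20240101' AND OrderDate < '20250101';"),
}

for name, sql in queries.items():
    start = time.perf_counter()
    count = cur.execute(sql).fetchone()[0]
    print(f"{name}: {count} rows counted in {time.perf_counter() - start:.3f}s")
```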
Wednesday 4:00 PM - 5:10 PM · Room 345-346
Leveraging Large Language Models with Power BI
Large Language Models, like ChatGPT, have the potential to transform how you develop and deliver Power BI solutions. In this session, you'll learn how to integrate Azure OpenAI and GitHub Copilot into your Power BI development process, leveraging prompt engineering techniques to improve your solutions.
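As a small, hedged illustration of the kind of integration discussed (the endpoint, API version, and deployment name below are placeholders and assumptions, not the presenter's setup), you can call an Azure OpenAI deployment from Python to draft a DAX measure during Power BI development:

```python
# Minimal sketch: ask an Azure OpenAI deployment to draft a DAX measure.
# Endpoint, key, API version, and deployment name are placeholders/assumptions.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://my-openai-resource.openai.azure.com",
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
)

prompt = (
    "Write a DAX measure named 'Sales YTD' for a table 'Sales' with a column "
    "'Amount', using a 'Date' table marked as the date table. Return only the DAX."
)

response = client.chat.completions.create(
    model="gpt-4o",  # the Azure *deployment* name, assumed here
    messages=[
        {"role": "system", "content": "You are a Power BI and DAX expert."},
        {"role": "user", "content": prompt},
    ],
)
print(response.choices[0].message.content)
```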
Thursday 8:30 AM - 9:40 AM · Room 342
Oracle/SQL to Fabric Migration Accelerator
This session explores a streamlined approach to ingesting data from Oracle and SQL databases into Microsoft Fabric’s OneLake using QMigrator, our in-house data migration product. The session focuses on how to automate data extraction, transformation, and ingestion while ensuring data integrity and governance. QMigrator provides a structured process for transforming the data into Fabric OneLake format. Key takeaways include schema mapping, automated monitoring, and cost-efficient scalability. Attendees will get a detailed overview of using QMigrator to set up pipelines, handle incremental loads, and enable self-service analytics using Fabric’s integrated ecosystem.
Thursday 10:20 AM - 11:30 AM · Room 344
Microsoft Fabric and Azure Health Data Services
Bringing together diverse health data sources is essential for delivering healthcare solutions. Microsoft Fabric and Azure Health Data Services make it possible to integrate, manage, and analyze this data within a single platform to support patient care, claims processing, provider targeting, and more.
First, we'll explore the challenges in accessing and harmonizing various healthcare data types, highlighting how gaps in information can limit effective claims management, provider targeting, and timely interventions.
Next, we'll dive into how Fabric's advanced analytics capabilities enable the import and transformation of complex health datasets, including claims data and social determinants of health (SDOH).
Finally, we'll cover Azure Health Data Services' tools for securely exporting and managing health data, enhancing patient outreach, and supporting analytics for provider performance and network optimization. We'll see how combining Fabric's analytics with these services supports proactive care management, efficient claims handling, and strategic provider targeting.
Attendees will leave with a clear plan to use Microsoft Fabric and Azure Health Data Services for comprehensive healthcare data integration and analytics, enabling better patient care, improved claims processing, and effective provider engagement.
Thursday 2:00 PM - 3:10 PM · Room 320-321
Handling Big Data with Power BI
When you started working in Power BI, you only had a few million rows of data, or the data latency requirements were non-existent. Now all of that has changed: the data volume is billions of rows and/or data latency must be less than 5 seconds. How do you manage these challenges with Power BI? Join this demo-heavy session where we will explain and demonstrate how.
Thursday 4:00 PM - 5:10 PM · Room 342
Indexing for Performance
What does the optimizer actually do with an index, what do the index structures look like, and what can we do to optimize index performance? This session covers index internals and optimizer limitations.
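For a taste of the index internals covered (the table name and connection below are illustrative assumptions), the sketch inspects an index's B-tree depth, page counts, and fragmentation with sys.dm_db_index_physical_stats:

```python
# Minimal sketch: look at index structure (depth, pages, fragmentation) for one table.
# Connection string and table name are illustrative.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=myserver;DATABASE=SalesDB;"
    "Trusted_Connection=yes;TrustServerCertificate=yes"
)

sql = """
SELECT i.name AS index_name,
       ips.index_type_desc,
       ips.index_depth,                  -- levels in the B-tree
       ips.page_count,                   -- pages at this level
       ips.avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'DETAILED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
ORDER BY i.name, ips.index_level;
"""

for row in conn.cursor().execute(sql):
    print(row.index_name, row.index_type_desc, row.index_depth,
          row.page_count, round(row.avg_fragmentation_in_percent, 1))
```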
Thursday 4:00 PM - 5:10 PM · Room 345-346
Deep Dive on Power BI, Teams and SharePoint
Microsoft Teams, SharePoint, and Power BI can be tightly integrated within Microsoft Fabric. SharePoint can be a data source (lists), a container for data files (Excel, CSV, etc. in libraries), and a dashboarding platform (pages). Teams can be a complete front end for reports and host content contextually. Fabric can take your SharePoint data to new heights altogether.
This demo-rich session will explore all of these scenarios in great depth. SharePoint data can be finicky to retrieve, and this session will show examples and suggest a few best practices for doing so. In addition, connecting Fabric to SharePoint opens up a whole new world for Excel.
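As a hedged example of pulling SharePoint list data programmatically outside of Power Query (the site ID, list ID, and token handling below are placeholders, and this is not necessarily the approach shown in the session), the Microsoft Graph list-items endpoint returns list rows that can then be landed in Fabric:

```python
# Minimal sketch: read SharePoint list items via Microsoft Graph.
# SITE_ID, LIST_ID, and the access token are placeholders; token acquisition
# (e.g., via MSAL) is out of scope here.
import requests

ACCESS_TOKEN = "<bearer token with Sites.Read.All>"
SITE_ID = "<site-id>"
LIST_ID = "<list-id>"

url = f"https://graph.microsoft.com/v1.0/sites/{SITE_ID}/lists/{LIST_ID}/items"
resp = requests.get(
    url,
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
    params={"expand": "fields"},   # include the list columns, not just item metadata
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("value", []):
    print(item["id"], item["fields"].get("Title"))
```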
Friday 9:00 AM - 10:10 AM · Room 320-321
Azure SQL Database Hyperscale elastic pools - a deep-dive
Azure SQL Database offers a very popular deployment option called elastic pools, to help ease the challenges around right-sizing and cost-optimizing resources for a group of databases. In this session, we will dive deep into the latest generation of elastic pools which leverage the Hyperscale cloud-native architecture. Starting with a quick overview of the motivation for using elastic pools, and a quick recap of the Hyperscale tech, we will use demos to show you:
- How Hyperscale elastic pools ("HSEP") implement resource sharing...
- ... while maintaining strong isolation between databases
- What are the performance and capacity limits of each HSEP
- How you can proactively control "noisy neighbor" databases in a HSEP
- How HSEP scales vertically and/or horizontally, and what the impact of such scaling is on your workloads
- How to monitor HSEP effectively using DMVs, Azure Monitor, and Database Watcher (a minimal monitoring sketch follows below)
- How backups, high availability, and disaster recovery work for databases in a HSEP
- Last but not least, how HSEP helps with cost optimization, and what you should watch out for to manage TCO.
To make the best use of this session, some Azure SQL knowledge would help, but it's not necessary. Anyone who plans to run databases in Azure SQL should be aware of elastic pools so that they can benefit from the optimizations they provide. This session will directly help you understand how HSEPs work and how they may benefit your scenario.
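As a small illustration of the monitoring bullet above (the server name and credentials are placeholders, and the session's own demos may use different tooling), pool-level resource usage can be pulled from sys.elastic_pool_resource_stats in the logical server's master database:

```python
# Minimal sketch: recent resource usage for each elastic pool, read from the
# logical server's master database. Server and credentials are placeholders.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=master;"
    "UID=monitor_user;PWD=...;Encrypt=yes"
)

sql = """
SELECT TOP (20) end_time,
       elastic_pool_name,
       avg_cpu_percent,
       avg_data_io_percent,
       avg_log_write_percent
FROM sys.elastic_pool_resource_stats
ORDER BY end_time DESC;
"""

for row in conn.cursor().execute(sql):
    print(row.end_time, row.elastic_pool_name,
          row.avg_cpu_percent, row.avg_data_io_percent, row.avg_log_write_percent)
```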
Friday 10:20 AM - 11:30 AM · Room 340-341
Securing Azure PaaS Network Communications
Many companies use one or more Platform as a Service (PaaS) offerings when working with Microsoft Azure. However, these companies don't want to allow the network traffic to these PaaS services to go over the public Internet. In this session, we will learn more about why companies want to secure this network traffic and, more importantly, how to secure this traffic and what application changes need to be made to use these private connections.
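As one simple, hedged example of the kind of verification involved (the host name below is a placeholder), once a Private Endpoint and the matching Private DNS zone are in place, the PaaS endpoint's usual host name should resolve to a private IP from inside the virtual network, so no application connection-string change is needed beyond confirming the resolution:

```python
# Minimal sketch: verify that a PaaS endpoint resolves to a private IP address
# (i.e., traffic will use the Private Endpoint, not the public Internet).
# Host name is a placeholder; run this from inside the virtual network.
import ipaddress
import socket

host = "myserver.database.windows.net"  # placeholder PaaS endpoint
resolved_ip = socket.gethostbyname(host)

if ipaddress.ip_address(resolved_ip).is_private:
    print(f"{host} -> {resolved_ip}: private IP, Private Endpoint in use")
else:
    print(f"{host} -> {resolved_ip}: public IP, check the Private DNS zone configuration")
```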
Friday 12:30 PM - 1:40 PM · Room 340-341
Secure, Compliant, and Connected: How Microsoft 365 Copilot for Sales Manages Your Data and Insights
Microsoft 365 Copilot for Sales harnesses the power of generative AI and role-based agents to seamlessly integrate with leading CRM platforms such as Salesforce, ServiceNow, and Dynamics 365. In this session, you’ll gain a detailed understanding of how your organization’s data—and the insights generated from it—are managed in a secure and compliant manner.
Explore how Copilot connects to external systems like Salesforce and ServiceNow, as well as internal Microsoft services such as Microsoft Graph and SharePoint, to unify data from across your enterprise. You'll learn how insights data is stored, how we address GDPR and other key privacy requirements, and how insight structures are linked to your existing systems of record. We'll also cover how these insights can be extended to support your organization’s unique needs.
If you’re responsible for data architecture, security, compliance, or AI adoption in your organization, this session will provide the foundational knowledge to help you deploy Copilot for Sales confidently and responsibly.
Friday 12:30 PM - 1:40 PM · Room 345-346
Avoiding the "Grey Box of Death": Automatically Checking For Broken Visuals in Power BI
Have you ever updated a semantic model/dataset and didn't realize it broke a visual in a Power BI report? Have you ever seen that "grey box of death" (or your customers call you about it) after making an update to Power BI? In this session, I demonstrate a way to combine Microsoft Playwright and Azure DevOps to automatically test for broken visuals and notify you about those issues.