Monday 9:00 AM - 5:00 PM · Room 320-321
Azure Cosmos DB for Developers: From Basics to AI

Hasan Savran
Microsoft MVP, Owner of SavranWeb Consulting, Sr. Business Intelligence Manager at Progressive Insurance
This workshop is perfect for any developer eager to explore how to integrate Azure Cosmos DB into their applications! We’ll dive into the ins and outs of Azure Cosmos DB, helping you gain a thorough understanding of its architecture, features, and handy tools. We’ll also cover essential concepts like partitioning and data modeling for distributed NoSQL databases, making sure you feel confident in working with this powerful technology.
The workshop will include an in-depth look at all the database services offered by Azure Cosmos DB, with a primary focus on the SQL API. We will utilize the Azure Cosmos DB Emulator as much as possible, so participants may not need an Azure subscription for most of the workshop. Additionally, attendees will learn how to use the Azure Cosmos DB Data Migration Tool to migrate data from various sources into Azure Cosmos DB.
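For readers who want a preview of working against the emulator, here is a minimal, hypothetical sketch using the Python SDK (the workshop itself may use other languages and tools); the key placeholder stands for the emulator's documented well-known key:

```python
# Minimal sketch: connect to the local Azure Cosmos DB Emulator with the Python
# SDK (pip install azure-cosmos). The emulator listens on https://localhost:8081
# and ships with a fixed, well-known primary key published in its documentation.
from azure.cosmos import CosmosClient, PartitionKey

EMULATOR_ENDPOINT = "https://localhost:8081"
EMULATOR_KEY = "<well-known emulator key from the documentation>"

client = CosmosClient(
    EMULATOR_ENDPOINT,
    credential=EMULATOR_KEY,
    connection_verify=False,  # local testing only: the emulator uses a self-signed certificate
)

# Partition key choice is one of the data modeling topics the workshop covers.
database = client.create_database_if_not_exists("WorkshopDb")
container = database.create_container_if_not_exists(
    id="Orders", partition_key=PartitionKey(path="/customerId")
)

# Insert a document and query it back with the SQL API.
container.upsert_item({"id": "1", "customerId": "c-100", "total": 42.50})
items = container.query_items(
    query="SELECT c.id, c.total FROM c WHERE c.customerId = @cid",
    parameters=[{"name": "@cid", "value": "c-100"}],
    partition_key="c-100",
)
print(list(items))
```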
Azure Cosmos DB provides a range of AI capabilities via Azure AI Foundry. In the workshop's concluding section, participants will explore AI features and vector data options available in Azure Cosmos DB.
Participants are welcome to join the workshop with or without their computer, as there will be valuable learning opportunities regardless of whether a computer is used.
Monday 9:00 AM - 5:00 PM · Room 340-341
Advanced Data Protection Strategies with SQL Server: A Hands-on Workshop
This workshop equips developers and database professionals with the practical skills needed to implement robust data protection in their SQL Server environments, building systems where sensitive information remains secure even from privileged users such as database administrators.
Through structured, hands-on exercises, participants will:
Discover and Classify sensitive data using SQL Server's built-in tools—establishing the foundation for a comprehensive security strategy
Implement Column-Level Protection through Dynamic Data Masking and Always Encrypted, including advanced scenarios with secure enclaves that maintain query performance while protecting data confidentiality
Deploy Row-Level Security to enforce granular access controls based on user context, ensuring proper data isolation and visibility
Establish Audit Trails to monitor and track all interactions with classified data, supporting compliance requirements and enabling security incident investigations
Throughout the day, participants will progressively build a secured database and a .NET web application, integrating each security feature into a cohesive solution. By the session's end, attendees will have developed a proof-of-concept that demonstrates enterprise-grade data protection techniques that satisfy security requirements while maintaining application functionality.
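As a small illustration of how little client code changes once Always Encrypted is configured, here is a hedged sketch in Python with pyodbc (the workshop itself builds a .NET web application); server, database, table, and values are hypothetical:

```python
# Illustration only: with the Microsoft ODBC Driver 18 for SQL Server, enabling
# Always Encrypted on the client is mostly a connection-string setting. The
# driver encrypts parameters and decrypts result columns transparently, provided
# the client identity can access the column master key.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=localhost;DATABASE=SecureDemo;"      # hypothetical server and database
    "Trusted_Connection=yes;"
    "Encrypt=yes;TrustServerCertificate=yes;"    # local/dev convenience only
    "ColumnEncryption=Enabled;"                  # turn on Always Encrypted support
)

cursor = conn.cursor()
# Queries that touch encrypted columns must be parameterized so the driver can
# encrypt the parameter value before it leaves the client.
cursor.execute(
    "SELECT PatientName, SSN FROM dbo.Patients WHERE SSN = ?",  # hypothetical table
    ("555-12-3456",),
)
for row in cursor.fetchall():
    print(row.PatientName, row.SSN)  # values arrive decrypted on the client
```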
Monday 9:00 AM - 5:00 PM · Room 342
Deep Dive: Building a framework for orchestration in Azure Data Factory
Finding the balance between cost, efficiency and performance for cloud-based ETL processes can be a tricky proposition, and a good ETL framework will help you get there.
In this all-day session we will take a deep dive into what the components of such a framework may look like in Azure Data Factory, and why orchestration can be a good option if you're trying to minimize cost. We will also do an in-depth review of an actual framework I've developed and use today, which you can use as a starting point for your own efforts.
If you are currently using Azure Data Factory and feel like you're reinventing the wheel all the time, or if you're planning to move from SSIS to ADF in the future and would like some ideas on how to create an ETL framework, this will be the perfect training day for you.
Monday 9:00 AM - 5:00 PM · Room 343
A Comprehensive Guide to Direct Lake for the Pro Data Modeller
Join us for a workshop designed specifically for Pro Data Modellers. This comprehensive session will guide you through all the critical elements needed to build, tune, and maintain a Direct Lake Model in Microsoft Fabric.
Throughout the workshop, you will delve into essential topics such as:
- Understanding the prerequisites for setting up your model.
- Exploring the anatomy of Parquet files and their role in data storage.
- Mastering transcoding and framing techniques to optimize data processing.
- Implementing SQL fallback strategies for enhanced reliability.
- Discovering new features and how they can benefit your projects.
- Ensuring robust security measures to protect your data.
- Fine-tuning performance to achieve optimal efficiency.
- Navigating the migration process with ease.
- Tackling advanced topics to elevate your modelling skills.
By the end of this workshop, you will have gained the knowledge and skills necessary to effectively manage a Direct Lake Model, ensuring your projects are both efficient and secure.
Monday 9:00 AM - 5:00 PM · Room 344
Database Administration for the Non-Database Administrator
In this all-day session on Microsoft SQL Server, we will learn how SQL Server works, both in Azure and on-premises, and what needs to be done to keep it up and running smoothly when you don't have a full-time database administrator on staff.
We will cover a variety of topics, including backups, upgrade paths, indexing, database maintenance, database corruption, patching, virtualization, disk configurations, high availability, database security, database mail, antivirus software, scheduled jobs, and much, much more.
From a product perspective, we'll examine SQL Server in an Azure VM, Azure SQL DB, and Azure SQL DB Managed Instance, as well as the on-premises options, including Azure Arc-enabled deployments.
After taking this full-day session on SQL Server, you'll be prepared to take the information we go over back to the office and get your SQL Servers patched and properly configured so that they run without giving you problems for years to come.
Tuesday 9:00 AM - 5:00 PM · Room 320-321
Intro to T-SQL Data Manipulation Language
From early database management systems to modern data platforms like Microsoft Fabric, SQL has withstood the test of time. This language can be powerful in its navigation of data relationships, calculation of detailed or aggregate values, or adjustment of records stored in tables. All of these capabilities fall under the term Data Manipulation Language (DML). This language enables many data practitioners, from engineers to citizen developers, to harness their data and bend it to their needs.
If you have not had the opportunity to learn SQL DML and would find it useful in your work, this introductory course to Microsoft's version of SQL, called Transact-SQL or T-SQL, is a great place to start. Join this workshop to learn T-SQL DML from the ground up, starting with the SELECT statement and all of its primary clauses.
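As a taste of where the workshop starts, here is what a SELECT statement and its primary clauses look like, shown as a hypothetical query run from Python via pyodbc (table, column, and connection details are made up):

```python
# Hypothetical example of a SELECT statement with its primary clauses labeled,
# executed from Python via pyodbc. All names and connection details are made up.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=DemoDb;Trusted_Connection=yes;TrustServerCertificate=yes;"
)

query = """
SELECT   c.Region,                      -- SELECT: the columns and expressions to return
         COUNT(*)     AS OrderCount,    -- aggregate calculation
         SUM(o.Total) AS Revenue
FROM     dbo.Orders AS o                -- FROM: the source table(s)
JOIN     dbo.Customers AS c             -- JOIN: navigate the data relationship
         ON c.CustomerId = o.CustomerId
WHERE    o.OrderDate >= '2024-01-01'    -- WHERE: filter rows before aggregation
GROUP BY c.Region                       -- GROUP BY: aggregate per region
HAVING   SUM(o.Total) > 10000           -- HAVING: filter groups after aggregation
ORDER BY Revenue DESC;                  -- ORDER BY: sort the final result
"""

cursor = conn.cursor()
for region, order_count, revenue in cursor.execute(query):
    print(region, order_count, revenue)
```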
Tuesday 9:00 AM - 5:00 PM · Room 343
Data Science Jump Start using Microsoft Fabric
Data scientists can manage data, notebooks, experiments, and models while easily accessing data from across the organization and collaborating with their fellow data professionals using Microsoft Fabric.
In this session, you'll learn about the data science process in Fabric, how to train models with notebooks in Fabric, and how to track model training metrics with MLflow and experiments.
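A minimal sketch of what MLflow tracking looks like from a notebook (the model, data, and metric are illustrative; in Fabric, the runs are collected under an experiment item):

```python
# Minimal sketch of tracking a training run with MLflow from a notebook. The
# model, data, and metric are illustrative.
import mlflow
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

mlflow.set_experiment("workshop-experiment")  # the experiment that collects the runs

with mlflow.start_run():
    model = LogisticRegression(max_iter=5000).fit(X_train, y_train)
    accuracy = model.score(X_test, y_test)
    mlflow.log_param("max_iter", 5000)        # hyperparameter for this run
    mlflow.log_metric("accuracy", accuracy)   # metric to compare across runs
```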
Tuesday 9:00 AM - 5:00 PM · Room 348
Execution plans explained
You probably have some tricks up your sleeve for dealing with slow queries. Index the columns in the join and the WHERE clause. Rewrite the WHERE clause to enable index usage. Tinker with the join order, or perhaps even break up the query into smaller parts. Those tricks work. Sometimes. Not always. And when they don't, your job suddenly gets frustrating!
Sometimes, you wish you knew WHY a query is slow, so that you can target your changes exactly right, at precisely the root cause of the slowness. And the good news is, there already exists a way to find the root cause of bad performance. You "only" need to learn to work with execution plans.
In this full-day workshop, you will learn everything you need. You will learn what execution plans are and where you can find them. You will learn the basics of how to read execution plans. And you will learn all you need to know about the commonly encountered operators in execution plans: what their function is, how they operate, and what effect that has on performance.
In short: After attending this workshop, you will know how to obtain an execution plan for a slow running query, and you will know how to look at that plan and find the spot where it hurts, so that you know what to do to fix the performance issue.
Regardless of whether you have never seen an execution plan before, or whether you already have experience working with execution plans, this workshop will teach you how to look at an execution plan and then KNOW why your query is slow ... and what you can do to fix that!
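Purely as an illustration of the mechanism (the workshop will show where to find plans; in practice you would typically capture them in SSMS), here is one way to grab an actual execution plan from client code, sketched in Python with a hypothetical connection and query:

```python
# Hedged sketch: SET STATISTICS XML ON makes SQL Server return the showplan XML
# as an extra result set after the query's own results (the same plan SSMS shows
# as the graphical "Actual Execution Plan"). Connection and query are hypothetical.
import pyodbc

conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};SERVER=localhost;"
    "DATABASE=DemoDb;Trusted_Connection=yes;TrustServerCertificate=yes;"
)
cursor = conn.cursor()
cursor.execute("SET STATISTICS XML ON;")   # stays on for this session

cursor.execute("SELECT TOP (10) * FROM dbo.Orders WHERE Total > 100;")  # the slow query
rows = cursor.fetchall()                   # first result set: the query's rows

if cursor.nextset():                       # second result set: the showplan XML
    plan_xml = cursor.fetchone()[0]
    with open("slow_query_plan.sqlplan", "w", encoding="utf-8") as f:
        f.write(plan_xml)                  # open this file in SSMS to view the plan
```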
Wednesday 8:30 AM - 9:40 AM · Room 342
Copilot in Fabric - AI Data Science Help Tips and Tricks
Copilot and other generative AI features bring new ways to transform and analyze data, generate insights, and create visualizations and reports in Microsoft Fabric and Power BI. Come see how to take advantage of Copilot in Fabric, what it can do and how to make sure it's enabled.
Wednesday 8:30 AM - 9:40 AM · Room 344
Microsoft Fabric: Lessons from Year 1
In November 2023, Microsoft announced that Microsoft Fabric was generally available. In the time since, many organizations have jumped into the exciting world of this emerging technology.
Any fabric, technical or not, needs to be well maintained to retain its initial quality and usefulness. In the same way, your approach to implementing Microsoft Fabric can either set you up for a successful, long-term implementation or for one that needs a rebuild in 1-2 years.
In this session, I'll walk through features of Fabric that have lived up to the hype and work as prescribed. I'll also demo challenges I've faced with the product and how I've worked through them. Finally, I'll recommend routine activities that should be performed on any Fabric environment to keep it nice and clean.
(If this wasn't enough Fabric references and you need more, check out this session for even more puns!)
Wednesday 10:20 AM - 11:30 AM · Room 320-321
Fabric Data Factory: What's New and Roadmap
In this session, you will learn about the exciting product innovations and roadmap for Fabric Data Factory. You will see how Fabric Data Factory provides industry-leading data movement, transformation, and orchestration capabilities, and how AI-powered development experiences will enable you to be more productive and build data integration solutions more effectively.
Wednesday 10:20 AM - 11:30 AM · Room 342
Accelerate Intelligent App Development with SQL Database in Microsoft Fabric
Discover the new SQL database in Microsoft Fabric, where seamless setup and an integrated development environment enable you to quickly leverage the power of an AI-driven data platform. Provision and deploy an autonomous database with built-in security in seconds, featuring automatic setup. In this session, we'll delve into the potential of SQL database in Fabric, which combines the enterprise-scale features and capabilities of the Azure SQL Database engine with the autonomous management and ease-of-use advantages of the Microsoft Fabric data estate. We'll uncover core functionalities, best practices, and real-world strategies to integrate your database with other Fabric workloads, creating a tailored, high-impact data stack that empowers your solutions to thrive within the Fabric ecosystem.
Wednesday 10:20 AM - 11:30 AM · Room 340-341
Generate self-service governance dashboard using Microsoft Fabric and Purview
- Data Ingestion & Integration: Utilize Microsoft Fabric’s OneLake to unify governance data from Purview, M365, and third-party data sources. Implement Synapse Data Engineering to transform governance metadata into structured insights.
- Governance Metrics & Compliance Monitoring: Define key governance KPIs, including:
  * Data Quality Scores (accuracy, completeness, consistency).
  * Compliance Status (GDPR, HIPAA, industry-specific regulations).
  * Data Lineage & Sensitivity Labels (metadata classification).
  * User Access & Data Usage Trends.
- Self-Service Dashboard with Purview: Integrate Purview with Fabric for real-time metadata visualization. Design interactive dashboards in Power BI with drill-down capabilities for governance teams. Enable role-based access control for different user personas (CIO, Data Stewards, Compliance Officers).
Wednesday 10:20 AM - 11:30 AM · Room 343
Indexing Internals for Developers & DBAs
What are the secrets to making your queries run faster? Why does SQL Server use an index for some queries and not for others? What makes a good index? How many indexes should I have? Have you ever asked these questions? When you want to understand an application you look at its core architecture. Underneath the covers SQL Server is just a C++ application. Together we will discuss how the application architecture of SQL Server works, and how to apply this logic to building the best indexes for your queries.
Wednesday 10:20 AM - 11:30 AM · Room 347
Deployments aren’t enough – databases deserve a development process
How quickly can you take your database and examine its code? Database-as-code is not a new concept, but all too often we focus only on being able to apply changes to the database instead of having a development process that ensures we’re making good database changes. A more holistic development process offers early warnings of antipatterns via code analysis and increases our confidence even on mature databases with deployment “practice runs” and unit tests. In this session we’ll discuss the components of a database development cycle through the lens of Microsoft.Build.Sql projects and what capabilities we should expect in order to deliver database object updates easily and more reliably.
Wednesday 10:20 AM - 11:30 AM · Room 348
Getting started with SQL database in Fabric
Microsoft Fabric, the unified data platform, now includes an operational database solution and it is SQL! Come learn all the fundamentals of how this solution is the same and different from other SQL deployment options. We will also show you the value of using SQL in the Fabric ecosystem including developer experiences, automation, monitoring, and integration for AI applications.
Wednesday 2:00 PM - 3:10 PM · Room 320-321
Building a framework for orchestration in Azure Data Factory
Finding the balance between cost, efficiency and performance for cloud-based ETL processes can be a tricky proposition, and a good ETL framework will help you get there.
In this session we will look at what the components of such a framework may look like in Azure Data Factory, and why orchestration can be a good option if you're trying to minimize cost. We will also walk through an actual framework I've developed and use today, which you can use as a starting point for your own efforts.
If you are currently using Azure Data Factory and feel like you're reinventing the wheel all the time, or if you're planning to move from SSIS to ADF in the future and would like some ideas on how to create an ETL framework, this will be the perfect session for you.
Wednesday 2:00 PM - 3:10 PM · Room 342
Build an end-to-end data solution with Microsoft Fabric
In this session, we will explore how to leverage Microsoft Fabric to create an end-to-end data solution: leveraging Data Factory for data movement and orchestration, Notebooks to assist with data cleansing and further transformation, and various engines such as the Lakehouse and Power BI to complete the story. We look at how to get your data, get it into a usable form, and possibly even reuse that data without having to copy it, with the end goal of providing reports and visualizations. We'll also look at other benefits where you can leverage the data for a complete solution.
Wednesday 2:00 PM - 3:10 PM · Room 348
Transform your business with integrated solutions using SQL database in Microsoft Fabric
The promise of SQL Database in Fabric is that it is simple, autonomous, and optimized for AI - but what does that mean for you and your organization? In this session we will explore how various personas can benefit from a deeply integrated operational database in Fabric. Discover how customers are using SQL Database in Microsoft Fabric today, and how Fabric databases can help you build innovative solutions in the age of AI, faster and easier than ever.
Wednesday 4:00 PM - 5:00 PM · Room 342
Fabric houses, when to go for Lakehouse or Warehouse (or both)?
Should you keep your data in the Fabric Lakehouse or the Warehouse? What are the pros and cons? In Fabric, any and all data is kept in OneLake and processed by one of Fabric’s data engines. Choosing between Lakehouse and Warehouse (or both) is both a strategic and a performance-impacting decision.
In this demo-oriented session you will learn:
• Their distinctions and particular use cases
• When they overlap
• How to use them together
• When not to use one of them (or either)
Wednesday 4:00 PM - 5:00 PM · Room 344
Data Governance with Microsoft Purview and Microsoft Fabric
Managing data across large organizations can feel like an endless game of catch-up. Microsoft Purview and Fabric make it easier by giving teams tools to organize, protect, and share data without constant compliance headaches.
First, we’ll look at the challenges organizations face when data is scattered across silos, with security gaps slowing everything down and creating potential regulatory concerns.
Next, we’ll see how Purview and Fabric work together to streamline data management by cataloging assets, enforcing protections, and making data easier to find.
Finally, we’ll show how this approach simplifies compliance, improves data accessibility, and supports better collaboration across teams.
Attendees will come away with a practical guide on using Microsoft Purview and Fabric to keep data organized, compliant, and useful.
Thursday 8:25 AM - 9:35 AM · Room 345-346
Harness the Power of Microsoft Fabric and Notebooks
Microsoft Fabric gives us a notebook experience unlike any previous Microsoft product. The power of notebooks is immense. Sure, you can use data from your Lakehouse, Data Warehouse, or OneLake, but what about PYODBC? Can we connect to a relational database without a Data Pipeline or a Data Flow? Could we just download a file from Kaggle or GitHub and start using Data Wrangler? Could we use Beautiful Soup to scrape data, load it into a Pandas data frame, and begin working with it? Can we invoke OpenAI models using GPT to glean new insights into our data? Yes, Yes, Yes, Yes, and Yes. Yes we can, and in this session you will learn how to harness the power of Microsoft Fabric Notebooks.
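A minimal sketch of the kind of thing the session demonstrates, pulling a page, scraping it with Beautiful Soup, and landing the result in a pandas DataFrame from a plain Fabric notebook cell (the URL and table name are just examples):

```python
# Sketch: scrape a page with Beautiful Soup and load it into a pandas DataFrame.
# The URL is a placeholder for whatever public page you want to pull.
import requests
import pandas as pd
from bs4 import BeautifulSoup

resp = requests.get("https://example.com/books")          # hypothetical page
soup = BeautifulSoup(resp.text, "html.parser")

# Scrape something simple: every link's text and target.
rows = [{"text": a.get_text(strip=True), "href": a.get("href")}
        for a in soup.find_all("a")]

df = pd.DataFrame(rows)
print(df.head())

# From here the DataFrame can be explored with Data Wrangler or written to the
# attached Lakehouse, e.g. as a Delta table via Spark (inside a Fabric notebook):
# spark.createDataFrame(df).write.mode("overwrite").saveAsTable("scraped_links")
```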
Thursday 10:20 AM - 11:30 AM · Room 320-321
SQL Server and AI, tomorrow has arrived
Applications in need of modernization and integration with AI usually pick up an AI broker to bridge the gap. Microsoft SQL Server 2025, still in private preview, has announced interesting features to implement this bridge and help developers get AI results into existing applications faster. In this session you will learn about these features and how they will help infuse AI into current apps.
Thursday 10:20 AM - 11:30 AM · Room 347
Hold my beer; I know how to fix this with Copilot!

Hasan Savran
Microsoft MVP, Owner of SavranWeb Consulting, Sr. Business Intelligence Manager at Progressive Insurance
Many proof-of-concept AI applications fail to reach production because industries do not find value in copilot-like applications. Companies seek clarity and focus, not an overwhelming barrage of chat applications bombarding decision-makers with countless suggestions or summarizations. This session will help you connect the dots in the AI puzzle using Microsoft technologies, including the new open-source database, DocumentDB. We will examine vector stores, the RAG pattern, and multi-agent frameworks and learn how to implement these technologies in applications. By the end of this session, you will have all the practical information necessary to integrate AI features into your projects.
Thursday 10:20 AM - 11:30 AM · Room 348
10 Free SQL Databases: Your Playground for AI, Advanced Analytics, and Next-Gen Applications!
The Azure SQL free database offer is bigger than ever—now providing 10 free databases per subscription for life! This means unlimited opportunities to build, analyze, and innovate without worrying about costs.
In this session, you'll explore how to:
1. Unlock AI-driven insights by running ML models, anomaly detection, and intelligent recommendations directly in SQL
2. Power real-time analytics by processing and visualizing data streams for predictive forecasting and decision-making
3. Automate workflows and optimize data pipelines with SQL-based automation and event-driven processing
4. Enhance BI reporting and dashboarding with seamless integrations into Power BI and other analytics tools
5. Develop enterprise-grade applications, including AI-powered customer insights, fraud detection, financial forecasting, IoT telemetry, and marketing personalization
6. Set up multi-database environments for development, experimentation, and CI/CD pipelines—all without affecting production
Whether you're a developer, data scientist, or analytics enthusiast, this session will show you how to harness the full power of SQL for AI and analytics—with zero cost and limitless possibilities.
Join us and take your data-driven applications to the next level!
Thursday 12:40 PM - 1:50 PM · Room 320-321
Choosing the Right Data Store--An Overview of Azure Data Platform Choices
There are several different data platform solutions for use within your application. Selecting the right option can make the difference between a well-performing application and a poorly performing one, not to mention the cost impact of choosing the wrong solution.
In this session we'll look at the options of Azure SQL Database, Azure SQL Database Managed Instance, and Cosmos DB to see when each is going to be the right option and when it isn't, from both a price and a performance perspective.
Thursday 12:40 PM - 1:50 PM · Room 342
Transform Your Business with Real-Time Intelligence: Microsoft Fabric Meets Dynamics 365
Imagine your Dynamics 365 data not just as a record-keeping system, but as a live, actionable resource driving real-time decisions across your organization. In this session, we’ll dive deep into how Microsoft Fabric enhances Dynamics 365 with real-time intelligence, enabling you to unlock new opportunities for operational efficiency and innovation. We’ll start by demonstrating how to implement event sourcing to capture and process changes in Dynamics 365 in real time, integrating them into Microsoft Fabric’s powerful analytics and data integration capabilities. Using real client success stories, you’ll see how these patterns enable instant updates, real-time reporting, and proactive decision-making. You’ll also experience live demos showcasing the full lifecycle of real-time data integration—from capturing events in Dynamics 365 and transforming them in Fabric pipelines, to visualizing actionable insights in Power BI and bringing them back into Dataverse to empower your teams.
In this session you will learn how event sourcing and modern integration patterns bring data to life, through real client success stories and live demos.
What You’ll Learn:
• How to set up real-time data pipelines between Dynamics 365 and Microsoft Fabric.
• Practical event-sourcing patterns for building seamless, scalable integrations.
• Techniques to surface actionable insights back into Dynamics 365 and Dataverse to empower decision-making.
• Best practices for operationalizing real-time intelligence with minimal disruption to your existing architecture.
Why Attend:
You’ll leave this session with a step-by-step understanding of how to create real-time intelligence solutions, backed by practical examples and real-world use cases. Whether you're a data professional, architect, or Dynamics 365 expert, this session will give you actionable tools and insights to take back to your organization.
Post-Session Outcomes:
• Start building your first real-time integration using Microsoft Fabric and Dynamics 365.
• Access resources, templates, and best practices shared during the session to accelerate your implementation.
• Connect with like-minded professionals to collaborate and share ideas.
Call to Action:
Don’t just learn—act! After the session, you’ll be encouraged to apply what you’ve seen, with resources and next steps to kickstart your real-time intelligence journey. Let’s turn your data into a strategic advantage!
Thursday 12:40 PM - 1:50 PM · Room 343
A Query Runs Through It: An Introduction to the SQL Server Engine
Have you ever wondered what happens inside SQL Server when you execute that query you wrote? This session will serve as an introduction to what is going on under the hood, commonly called SQL Server Internals. Whether writing queries or tuning them, SQL Server internals knowledge is highly valuable in Azure VMs or SQL DB, AWS, GCP, and on-premises, as the SQL Server engine is the same. Together we will dip into why data types matter, ponder pages, sample the storage engine, and peer into the query processor as we see what happens when your query runs.
Thursday 12:40 PM - 1:50 PM · Room 344
Power BI Storage Modes: The Ultimate Showdown
Power BI offers three different storage modes for data: DirectQuery, Import, and Direct Lake. Each of them has its own advantages and disadvantages, depending on the scenario and the requirements. But which one is the best overall? How do they compare in terms of performance, scalability, flexibility, and ease of use?
In this session, we will put the three storage modes to the test in a series of challenges inspired by the Olympic Pentathlon. We will use real-world data sets and scenarios to measure how each storage mode handles different aspects of data analysis and visualization. We will also share some best practices and tips on how to choose the right storage mode for your project.
By the end of this session, you will have a better understanding of the strengths and weaknesses of each storage mode, and you will be able to decide which one deserves the gold medal in your Power BI dashboard.
Thursday 12:40 PM - 1:50 PM · Room 345-346
Avoid Data Silos! Best Practices for Implementing Shared Semantic Models
In this presentation, we will explore how to create effective shared semantic models in Power BI and how to manage them in the Power BI Service. Shared semantic models can help reduce the cost and complexity of fragmented data, also known as data silos, within an organization. By using shared semantic models, developers can save time and resources by only having to maintain a single semantic model instead of multiple unique ones. Additionally, shared semantic models can prevent discrepancies and ensure consistent logic across reports. We will also cover how to configure semantic models for enhanced user experiences, enable row level security (RLS) to protect sensitive data, publish semantic models for optimal sharing and distribution, and promote or certify semantic models for increased exposure.
Thursday 12:40 PM - 1:50 PM · Room 347
Unleash the Power of SQL Database in Fabric: Innovate Without Limits Using the Free Trial
Curious about SQL Database in Fabric but unsure where to start? This session is your gateway to limitless innovation—completely free. Discover how to leverage the Fabric free trial to explore serverless computing, real-time analytics, and AI-powered insights—all without cost or commitment.
We’ll walk you through hands-on scenarios, best practices, and real-world workflows that showcase the full potential of SQL Database in Fabric. Learn how to automate processes, optimize performance, and seamlessly integrate SQL with other Fabric components to drive efficiency and scale effortlessly.
Whether you're a developer, data engineer, or tech innovator, this session will give you the tools to experiment, build, and unlock new capabilities—risk-free. Don't miss this opportunity to transform your data solutions and bring your ideas to life with zero barriers, zero cost, and endless possibilities.
Thursday 12:40 PM - 1:50 PM · Room 348
Azure SQL DB Hyperscale: The Definitive Modern Database Choice
Join us at DataCon to explore Azure SQL Database Hyperscale. We'll discuss its architecture and use cases, highlighting benefits like larger database sizes, faster throughput, and continuous priming. Learn how these advancements boost scalability, speed, and reliability in data management. Stay ahead with the cutting-edge features of Azure SQL Database Hyperscale. We'll dive into real-world use cases and innovations that demonstrate the power of Azure SQL in driving the next generation of databases. Discover how organizations are leveraging Hyperscale for mission-critical applications, and how vector support enhances performance for complex queries and AI workloads. Experience firsthand the transformative impact of Azure SQL Database in various industries and understand why it is the preferred choice for modern data solutions.
Thursday 2:00 PM - 3:10 PM · Room 342
Data Processing Architecture: Key Design Principles & Considerations
In the era of big data, the design of data processing architecture is crucial for efficient data management and analysis. This presentation explores the fundamental principles and considerations essential for constructing robust data processing systems. Key design principles such as scalability, reliability, security, and flexibility are examined in detail.
The architecture's ability to handle varying data flows, ensure data integrity, and maintain security across multiple stages is emphasized. Additionally, the presentation discusses various architectural patterns, including data warehouses, data lakes, and data flow pipelines, highlighting their respective use cases and benefits.
Furthermore, the presentation contrasts traditional data processing architecture with the emerging concept of data mesh. While traditional architectures focus on centralized data processing and transformation, data mesh advocates for a decentralized approach, promoting domain-oriented data ownership and self-serve data infrastructure.
This comparison underscores the shift from monolithic data management to a more flexible and scalable architecture, addressing the diverse needs of modern data-driven organizations.
By adhering to these principles and considerations, data engineers can create systems that not only meet current data processing needs but are also adaptable to future technological advancements and data requirements.
Thursday 2:00 PM - 3:10 PM · Room 340-341
Working with OAuth 2.0 APIs in Azure Data Factory
Working with APIs can be tricky, and even more so when it's an OAuth 2.0 API. Add to that an ETL platform and automation, and you now have a perfect storm that's pretty difficult to navigate.
This session is about my journey with OAuth 2.0 APIs while trying to extract my own financial data, how I struggled with the authorization flow and how it finally started making sense.
We'll talk about what an OAuth 2.0 API is, and why they are so difficult to deal with when your tool of choice is an automated ETL platform. After that we'll take a closer look at the steps to develop an ADF pipeline that extracts data from an OAuth 2.0 API, and review some tools that can help you throughout the development process.
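A hedged sketch of the token exchange an ADF pipeline typically has to reproduce with a Web activity, shown in Python for clarity; every URL, client ID, and secret below is a placeholder:

```python
# OAuth 2.0 refresh-token flow, sketched with requests: post the refresh token to
# the token endpoint, get back a short-lived access token, then call the API with
# it as a bearer token. All endpoints and credentials are placeholders.
import requests

TOKEN_URL = "https://auth.example-bank.com/oauth2/token"     # hypothetical
API_URL = "https://api.example-bank.com/v1/transactions"     # hypothetical

token_resp = requests.post(
    TOKEN_URL,
    data={
        "grant_type": "refresh_token",
        "refresh_token": "<stored refresh token>",
        "client_id": "<client id>",
        "client_secret": "<client secret>",
    },
)
token_resp.raise_for_status()
access_token = token_resp.json()["access_token"]

# Use the access token as a bearer token on the actual data request.
data = requests.get(
    API_URL, headers={"Authorization": f"Bearer {access_token}"}
).json()
print(len(data.get("transactions", [])))
```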
Thursday 2:00 PM - 3:10 PM · Room 343
Migration Mystery Solved: Moving SSRS and SSAS to Power BI
Management has come to you and said that now is the time to migrate your SSRS and SSAS assets to Power BI. Now you’re wondering about things like: where do I start? Is there planning that can be done to de-risk the migration? Are there tools that could help me with migration? Good news! There are places to start, there is planning that can be done, and yes, there are tools that can help with migrations. We will also discuss P SKU to F SKU migration, as a lot of customers are facing that challenge now as well.
During this session we are going to guide you through the migration process by covering:
• Assessing your current environment
• Planning your migration strategy
• Selecting the right tools
• Executing the migration in phases
• Validating and optimizing the migration and its processes
Thursday 2:00 PM - 3:10 PM · Room 347
Unified DevOps for Microsoft Fabric, Azure SQL, and SQL Server with next-gen SQL projects
With Microsoft.Build.Sql SDK-style SQL projects, your database objects are stored as code for seamless development in Microsoft Fabric and client tools like VS Code and Visual Studio, but the advantages don’t stop there. The modernized SQL projects format backs Fabric’s git integration and deployment pipelines for Data Warehouse and SQL database, providing interoperability with extended CI/CD capabilities and your existing DevOps investments for SQL Server, Azure SQL Database, and Synapse Data Warehouse. SQL projects' code analysis and other build-time tests validate database code quality and correctness during continuous integration of code changes. With SQL projects, delivering database object updates is easier and more reliable, whether you're managing one database or a fleet of databases, because the deployment plan is dynamically calculated through the SqlPackage CLI. In this session we’ll learn how to leverage the Fabric experiences for database DevOps in addition to the depth of capabilities from SQL projects, so that we can efficiently develop and deploy database changes with source control integration, all with the tools you love.
Thursday 4:00 PM - 5:10 PM · Room 320-321
Rethinking Databases: Blockchain for Security, Trust, and Transparency
In an era where data breaches, fraud, and lack of transparency plague traditional databases, blockchain emerges as a revolutionary solution for secure and trusted data management. This session explores how blockchain technology is transforming the way we store, access, and verify data—eliminating single points of failure, enhancing security, and ensuring transparency like never before.
I'll dive into the core principles of blockchain that make it a superior alternative to traditional databases, from immutability and cryptographic security to decentralized trust mechanisms. Attendees will gain insights into real-world applications across industries such as finance, healthcare, and supply chain, showcasing how blockchain is redefining data integrity and ownership.
Whether you’re a developer, business leader, or tech enthusiast, this session will equip you with the knowledge to understand why blockchain is the future of secure and transparent databases—and how your organization can leverage it for a more trusted digital ecosystem.
Thursday 4:00 PM - 5:10 PM · Room 343
SQL Server 2025: The Enterprise AI ready database
Come learn about the latest information for SQL Server 2025, now in preview. You will learn how to bring AI to your data with AI applications using built-in vector capabilities ground to cloud. In addition, you will learn about new enhancements for developers including JSON, RegEx, REST API, GraphQL, Change Streaming. You will also learn about all the new engine features for security, performance, and availability. You will also see how to integrate your SQL Server 2025 experience using the new SSMS21 and SSMS Copilot.
Friday 9:00 AM - 10:10 AM · Room 342
Building a Data Culture
Data is a core part of our lives that influences all parts of business, from how we implement technology, to our establishment of business processes, all the way to individuals themselves. We solve tech and business process problems every day, but do we solve people problems?
How are people problems solved? With company culture. And culture can make or break data projects as easily as tech and process problems.
In this session, we'll talk about a company culture's influence on their ability to be data driven. We will identify common pitfalls that can influence how well your organization will be able to use data. We will also identify ways you can assess, and even measure, the data culture in your organization.
Friday 9:00 AM - 10:10 AM · Room 340-341
The Power of Semantic Layers: Ensuring Reliable and Governed BI
Developers often create excellent reports, only to be asked for data exports to Excel.
Ultimately, users want only one thing: easy access to data that helps them do their job. While Power BI reports and dashboards are powerful, they can't answer every question. Users may need to create their own reports, build Excel pivots, or extract data for other processes. These needs must be met securely and consistently, avoiding governance bypass, duplicated calculations, or compromised security.
Enter the Semantic Layer. The semantic layer connects Power BI and Microsoft Fabric back-end systems to end users, offering secure, governed, and user-friendly data access. A certified semantic model means that the content meets the organisation's quality standards and can be regarded as reliable, authoritative, and ready for use across the organisation.
This session will cover:
What is a Semantic Model? Define the semantic layer, its components, and its role in self-service BI.
The Certification Process Explore the structured, repeatable steps required to certify semantic models. This introduces the certification process map and checklist - a list of best practices a model must meet in order to be certified.
Deep dive into the Certification Checklist Learn some common best practices, optimization tips, security considerations, and practical advice for building robust semantic models.
Who Should Attend?
This session is for Data Analysts, Data Engineers, and BI Developers aiming to upgrade models to meet enterprise business and self-service needs.
Led by Steve Campbell, a Microsoft MVP and co-owner of a Microsoft data consultancy. Steve previously led data analytics for large-scale EMEA platform implementations.
Key Takeaways
- Understand the Role of Semantic Models.
- Learn the Certification Process and a repeatable framework for ensuring your models meet organisational standards.
- Take away practical tips for optimisation, security, and best practices.
Friday 9:00 AM - 10:10 AM · Room 347
Building Your First CRUD App using Dataverse for Teams with Copilot
We'll learn how to leverage Copilot and AI to quickly turn an Excel table into a Dataverse schema, complete with data and a fully functional low-code app, built with the assistance of Copilot Studio.
Microsoft Dataverse for Teams delivers a no-code/low-code, out-of-the-box data platform for Microsoft Teams. You get relational data storage, rich data types, enterprise-grade governance, and one-click solution deployment. Dataverse for Teams enables everyone to easily build and deploy apps. Come see what Dataverse for Teams is all about and how to get started building data-driven apps within Teams.
Friday 10:20 AM - 11:30 AM · Room 320-321
Make your solution sparkle with the medallion architecture
As a data engineer you spend a lot of time transforming data into data models that in turn will be used to provide insights to your organization. As the lines between the technical and business roles become more and more blurred it is even more important to establish a solid data architecture.
In this session we will demystify the medallion architecture and its different layers. We will showcase how you can use it in solutions like Azure Databricks or Microsoft Fabric, as well as dig into the benefits of using a non-technical approach as collaboration between different data roles continues to increase.
If you are a data engineer, a data analyst, a business analyst, or a business representative consuming analytical data, this session is for you. By the end of this session you will be able to identify how your data flows through the layers of the medallion architecture and what data to use for different use cases.
Friday 10:20 AM - 11:30 AM · Room 343
Performance tuning for Azure Cosmos DB

Hasan Savran
Microsoft MVP, Owner of SavranWeb Consulting, Sr. Business Intelligence Manager at Progressive Insurance
Azure Cosmos DB is a fully managed database service, freeing developers from database management tasks. However, as a developer, you still have important responsibilities, such as changing indexing policies, configuring connections, estimating workloads, and selecting the right throughput options. All of these tasks have a direct impact on the performance and cost of your application. To keep your application running smoothly and fast, we'll explore the .NET SDK settings, connection types, and indexing types. We'll also focus on selecting the right throughput options, using Query Execution Metrics and server-side programming. Join me as we explore how to optimize your Azure Cosmos DB solutions for the best performance.
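As an illustration of the kind of indexing and metrics decisions involved (shown here with the Python SDK, while the session focuses on the .NET SDK; account, database, and container names are hypothetical):

```python
# Illustration only: the same indexing-policy choices the session discusses,
# expressed with the Python SDK. Excluding paths you never filter on reduces the
# RU cost of every write. All names and keys below are placeholders.
from azure.cosmos import CosmosClient, PartitionKey

client = CosmosClient("<account endpoint>", credential="<account key>")
database = client.get_database_client("Sales")

indexing_policy = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],             # index everything by default...
    "excludedPaths": [{"path": "/rawPayload/*"}],  # ...except a large blob nobody queries
}

container = database.create_container_if_not_exists(
    id="Orders",
    partition_key=PartitionKey(path="/customerId"),
    indexing_policy=indexing_policy,
)

# One way to read the RU charge of a query, to compare indexing and throughput options.
items = list(container.query_items(
    query="SELECT * FROM c WHERE c.customerId = @cid",
    parameters=[{"name": "@cid", "value": "c-100"}],
    partition_key="c-100",
))
print("RU charge:", container.client_connection.last_response_headers["x-ms-request-charge"])
```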
Friday 10:20 AM - 11:30 AM · Room 347
AI and SQL ground to cloud to Fabric
New to AI? Come learn the fundamentals of how to get started with AI and Microsoft SQL everywhere it exists: ground to cloud to Fabric. This includes SQL Server 2025, Azure SQL, and SQL database in Fabric. SQL is the perfect place to integrate data with AI because of its industry-proven security, scalability, and availability. We will show GenAI capabilities like vector search, how to integrate these with your application, and Copilot experiences everywhere SQL exists, including GitHub Copilot.
Friday 12:30 PM - 1:40 PM · Room 344
Build a Robust App with Fabric SQL Database, GraphQL API, and User Data Functions
Discover how to harness the full potential of Fabric to build powerful, modern data-driven applications. In this session, participants will learn how to use the API for GraphQL and User Data Functions over SQL databases and other data sources in the Microsoft Fabric platform to build modern “CRUD” (Create, Read, Update, and Delete) data APIs that you can immediately call from an application. APIs for GraphQL enable you to effortlessly connect your Fabric data with your applications, with only a few clicks. If you have more complex business logic for your data, you can implement it with Python User Data Functions. With these two powerful building blocks and a SaaS-ified, developer-friendly experience, your app will be up and running in no time!
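Since calling an API for GraphQL item is standard GraphQL over HTTP, a client call looks roughly like the hedged sketch below; the endpoint URL, token acquisition, and schema (a "customers" type with a filter) are placeholders for whatever your Fabric item actually exposes:

```python
# Hedged sketch: POST a GraphQL query (plus variables) with a bearer token to a
# GraphQL endpoint. Endpoint, token, and schema below are placeholders only.
import requests

ENDPOINT = "https://<your-fabric-graphql-endpoint>/graphql"  # placeholder endpoint
TOKEN = "<Azure AD access token>"  # e.g. acquired with azure-identity / MSAL

query = """
query CustomersByCity($city: String!) {
  customers(filter: { city: { eq: $city } }) {
    items { customerId name city }
  }
}
"""

resp = requests.post(
    ENDPOINT,
    json={"query": query, "variables": {"city": "Atlanta"}},
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
print(resp.json()["data"]["customers"]["items"])
```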
Friday 12:30 PM - 1:40 PM · Room 347
Build AI Apps Smarter: Optimize SQL Database Costs & Performance in Fabric
Take your AI apps to the next level with cost-smart SQL database optimization in Fabric. This session explores how to maximize performance while minimizing costs by leveraging Fabric's capacity monitoring tools for SQL databases. Learn to track resource usage, identify inefficiencies, and optimize database performance—all without exceeding your budget. Designed for app developers, architects, and innovators, this talk will provide real-world strategies to build scalable, high-performing AI apps powered by intelligent SQL database cost management. Discover how to turn insights into action and supercharge your AI applications with Fabric.
Friday 12:30 PM - 1:40 PM · Room 348
Unleashing Modern Data Warehousing: Architecture, Insights & Future Innovations in Fabric Warehouse
Join us for a deep dive into the modern data warehouse! We’ll break down the core principles, use cases, and architectural intricacies. Discover how innovations like Data Virtualization, enhanced Data Modeling experiences, AI-powered insights with Copilot, and more are helping developers and customers unlock the full potential of Fabric Warehouse. This session is packed with live demos, best practices, and an exciting glimpse into the future roadmap of Data Warehousing.
Friday 1:50 PM - 3:00 PM · Room 340-341
Real-time product review data analysis for Retail using Fabric RTI, Purview, and Azure OpenAI
This session focuses on leveraging Microsoft Fabric's Real-Time Intelligence and Generative AI capabilities to monitor and analyze streaming customer review data in retail. Participants will learn how to:
- Ingest and process streaming data from customer reviews in real-time.
- Utilize AI agents to detect anomalies in sentiment trends and unusual patterns (see the minimal sketch after this list).
- Generate insights using Microsoft Fabric’s Real-Time Intelligence, Copilot Studio, and ML notebooks.
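Purely as a generic illustration of the kind of anomaly check involved, and not the session's exact method, here is a rolling z-score over synthetic sentiment scores:

```python
# Generic illustration: flag anomalies in a sentiment trend with a rolling
# z-score over streaming scores, the kind of check an agent or query could
# automate. The data below is synthetic.
import numpy as np
import pandas as pd

# Synthetic per-minute average sentiment for a product (0 = negative, 1 = positive).
rng = np.random.default_rng(0)
sentiment = pd.Series(0.75 + 0.05 * rng.standard_normal(120))
sentiment.iloc[90:100] -= 0.4          # simulated burst of negative reviews

rolling_mean = sentiment.rolling(window=30, min_periods=10).mean()
rolling_std = sentiment.rolling(window=30, min_periods=10).std()
z_scores = (sentiment - rolling_mean) / rolling_std

anomalies = z_scores[z_scores.abs() > 3]
print(f"{len(anomalies)} anomalous minutes detected at positions: {list(anomalies.index)}")
```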
Friday 1:50 PM - 3:00 PM · Room 344
Mirroring in Microsoft Fabric - Overview and Roadmap
Microsoft Fabric accelerates data potential for the era of AI. Mirroring simplifies linking of external databases into Fabric, with full replicas created with just a couple of clicks. Once a database is mirrored, real-time updates will automatically be replicated into OneLake and stored as Delta Parquet, an analytics-ready format that works seamlessly for every analytics workload in Fabric. Join us to gain deeper insights and learn how to get started with Mirroring by bringing data gravity to unlock powerful insights in Fabric.
Friday 1:50 PM - 3:00 PM · Room 347
Introduction to SQL Server Essential Concepts
When I first started learning about SQL Server, really deeply learning, there were a few “key” concepts that you hear repeated often by top speakers and SQL MVPs: internals, recovery models, and backups. They are interconnected. As the learning continued, it became self-evident how understanding basic data internals (pages, extents, and allocation bitmaps), database recovery models, the transaction log and VLFs, and advanced backup options like striping and piecemeal restores affected the way you use SQL Server. They affected not just SQL Server itself, but the way you make decisions about how best to use SQL Server to support your business. This session gives you that core set of understanding required for advanced SQL Server learning.