
Intelligent Product Search and Recommendations for Fashion Retail


The ranking techniques used in Bing Ads can help fashion retailers display more relevant product ads in a cost-effective way that may also be more attractive for shoppers. Working with a prominent fashion retail partner, these techniques delivered a 5.9% increase in the partner’s click-through rate (the number of times a product is clicked divided by the number of times it is shown).

These techniques can be extended beyond basic ad serving into other scenarios on a fashion retailer’s website. When shoppers use the search box on the fashion retailer’s website, these techniques can improve product search result relevance. Moreover, search results can be tailored to an individual shopper’s fashion taste by integrating additional permissioned signals such as individual purchase behaviors and search history. These signals can also be used in product recommendations to rank products by default if the search query box is not used.

Scenario: Maximize product ad relevance and clicks on Bing Ads

In search advertising, keywords specified by the advertiser are traditionally matched against the query and product description to rank relevant product ads. However, in the fashion industry, it is costly for advertisers to manually specify and bid on all possible fashion attribute keywords. Since an advertiser’s keywords determine how product ads are ranked, the potential to maximize relevance, and thus user clicks, is limited by the selection of those keywords.
 
For example, when a Bing user wants to search for dresses on sale from a fashion brand name, she may issue a query such as "<brand name> dresses on sale". Consequently, the advertiser’s initially specified keywords "<brand name>" and "dresses" will match a huge number of potentially relevant dresses from the fashion brand name.
 
However, dress descriptions may contain many other fashion attribute words like “floral”, “jacquard” or “bodycon” that could closely correlate with the relevance of a given product to the query but were neither specified by the advertiser nor typed in the query by the user. Therefore, it is difficult to identify and properly rank the few most relevant dresses that most users will like, just based on the advertiser’s initial keywords.


Solution: Ranking beyond keyword-based matching

Traditionally, ads were ranked based on keyword matches between the search query and ad description (e.g. "<brand name>" and "dresses"). Now, Bing can rank product ads based on other words in the product description that did not appear in the query (e.g. "floral", “jacquard”, “bodycon”) or in the initial set of advertiser’s provided keywords.
 
Since each product description’s relevance is ranked in relation to a user’s query, we consider each Query-ProductDescription to determine the best ordering. This Query-ProductDescription concept is what we refer to as the basic ranking unit. Query-ProductDescription examples for the query “<brand name> dresses on sale” are shown in Figure 3 below. Unmatched words in each new Query-ProductDescription are colored red.   

How Ranking Works: Learning unmatched word importance from clicks

For a given query, the importance of unmatched words in a product’s description can be learned from other products that were clicked in the past for this query and that contain the same unmatched words. Specifically, Click-Through Rate (CTR) captures how much users like a product. For example, given the identical past query “<brand name> dresses on sale”, if the product “<brand name> jacquard midi dress” has a higher CTR than the product “<brand name> bodycon midi dress”, you can infer that the unmatched product word “jacquard” is likely more important than “bodycon”. Importance in this case refers to popularity: “jacquard” refers to dresses with an intricately woven pattern, while “bodycon” refers to tight-fitting, figure-hugging dresses. It seems jacquard dresses are more comfortable, and hence more popular, than bodycon!

To extend the amount of data that can be used to find additional important words, you don’t have to limit yourself to identical queries issued in the past (“<brand name> dresses on sale”). You can also look at similar past queries, such as “<brand name> dresses online”, that contain the same query keyword “dresses”.
 
To enable learning from such similar queries, the query keywords (“dresses”) that co-occur with unmatched product description words are represented as Query-Product word pairs like “dresses-jacquard”, “dresses-bodycon”, etc. Query-Product word pairs are then used as features to represent the original Query-ProductDescription strings.
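As an illustration, generating Query-Product word-pair features could look like the sketch below. The tokenization and pairing scheme are simplified assumptions for illustration, not Bing’s actual implementation:

```python
def query_product_pairs(query, description):
    """Generate Query-Product word-pair features: pair each query word
    with each product-description word the query did not match."""
    q_words = query.lower().split()
    # Unmatched words appear in the description but not in the query.
    unmatched = [w for w in description.lower().split() if w not in set(q_words)]
    return [f"{q}-{d}" for q in q_words for d in unmatched]

pairs = query_product_pairs("brandx dresses on sale",
                            "brandx jacquard midi dresses")
# pairs include "dresses-jacquard" and "dresses-midi"
```

In a production system, pairing would likely be restricted to meaningful query keywords (after stop-word removal and stemming), but the idea of representing a Query-ProductDescription as a bag of word pairs is the same.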


Once the Query-Product word pairs and CTR data are prepared, they are fed as training data to a machine learning model to learn Query-Product CTR patterns.
 

When the system is deployed and a user enters a previously unseen query, the model will generate the corresponding Query-Product word pair features for the query and the product descriptions from the fashion advertiser’s catalogue. It will then use them to predict the CTR for each product. Products with the highest predicted CTR will be ranked first.
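To make the flow concrete, here is a deliberately naive sketch in which the “model” is just a lookup of historical mean CTR per word pair; the product names and CTR numbers are illustrative, and a real system would train a proper machine learning model over these features:

```python
from collections import defaultdict

def train_pair_ctr(examples):
    """examples: list of (pair_features, observed_ctr).
    Learn the mean historical CTR associated with each word pair."""
    totals = defaultdict(lambda: [0.0, 0])
    for pairs, ctr in examples:
        for p in pairs:
            totals[p][0] += ctr
            totals[p][1] += 1
    return {p: s / n for p, (s, n) in totals.items()}

def predict_ctr(pair_ctr, pairs, default=0.01):
    """Predicted CTR = mean of learned pair CTRs present in this
    Query-ProductDescription; unseen pairs fall back to a default."""
    scores = [pair_ctr.get(p, default) for p in pairs]
    return sum(scores) / len(scores) if scores else default

model = train_pair_ctr([(["dresses-jacquard"], 0.08),
                        (["dresses-bodycon"], 0.03)])
products = {"jacquard midi dress": ["dresses-jacquard"],
            "bodycon midi dress": ["dresses-bodycon"]}
ranked = sorted(products, key=lambda p: predict_ctr(model, products[p]),
                reverse=True)
# "jacquard midi dress" ranks first: its pair carried higher historical CTR
```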


In summary, product ads are ranked in descending order of predicted CTR to maximize ad relevance and user clicks for the query. This spares fashion advertisers the hassle of specifying all keywords manually. Interestingly, using this CTR-prediction technique, fashion retailers can also evaluate the importance of the ad keywords they initially entered for bidding.

Additional Ranking Considerations: Product Price and Images

You may be thinking that product description words cannot be the only factor that influences a shopper’s browsing and buying behavior. And you would be right - Bing goes beyond ranking strictly based on product descriptions. Numerical features such as the product’s display price and percentage discounts (provided by the advertisers), and even raw product image pixels, are used together with product description features to represent each product and contribute to ranking.
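One simple way to picture combining these signals is a linear blend of the text-based CTR estimate with the numeric attributes. This is purely an illustration: the weights below are hypothetical, and image features are omitted:

```python
def product_score(predicted_ctr, price, discount_pct,
                  w_text=1.0, w_price=-0.001, w_disc=0.002):
    """Blend the text-based CTR prediction with numeric product features.
    A lower display price and a higher percentage discount nudge the score
    upward; all weights here are made-up illustration values."""
    return w_text * predicted_ctr + w_price * price + w_disc * discount_pct

# A cheaper, more discounted product outscores a pricier one with
# identical description-based CTR:
cheap = product_score(0.05, price=80.0, discount_pct=30.0)
pricey = product_score(0.05, price=150.0, discount_pct=0.0)
```

In practice these features would be fed jointly into the learned model rather than hand-weighted, but the intuition is the same: price and discount shift a product’s rank relative to its text-only relevance.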

Fashion Retailer Website Scenarios

The techniques would also apply if a fashion retailer wanted to implement product search on their own website. These fashion retailer website scenarios enable more relevant product search experiences to shoppers, increasing user satisfaction and click through rates.
 
The ranking techniques presented can similarly be used to maximize the relevance of product search results for shoppers who browse for products using the search box. As with ad serving, they are effective even if the query entered by the user does not perfectly match the product description. Product search results would in this case also be ranked by predicted CTR using the same Query-ProductDescription ranking unit.

Because individual fashion tastes are subjective, as a retailer or an advertiser you may want to improve product relevance further by tailoring it to individual shoppers. Building upon the general search use case, you can extend this technique to support relevant and tailored search experiences. If you have obtained a user’s permission to use their search history of products purchased or browsed, you can append the shopper’s individual context as the third component to the Query-Product ranking unit. The new basic unit of ranking would now become a Query-Product-IndividualContext triplet. Individual context information that can influence a shopper’s behavior could include descriptions of past products browsed and bought by a given shopper.
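A sketch of how the ranking unit could be extended with individual context is shown below. The helper is hypothetical; a real system would learn weights for these triplet features rather than merely emit them:

```python
def triplet_features(query, description, context_descriptions):
    """Build Query-Product-IndividualContext features: Query-Product word
    pairs, extended with context words (drawn, with the shopper's
    permission, from products they previously browsed or bought) that
    also appear in this product's description."""
    q_words = query.lower().split()
    d_words = description.lower().split()
    ctx = {w for text in context_descriptions for w in text.lower().split()}
    pairs = [f"{q}-{d}" for q in q_words
             for d in d_words if d not in set(q_words)]
    shared = sorted(ctx & set(d_words))  # taste-overlap signal
    return pairs + [f"{p}-{c}" for p in pairs for c in shared]

feats = triplet_features("dresses on sale",
                         "floral midi dress",
                         ["floral wrap top"])
# "dresses-floral-floral" signals that this shopper has browsed
# floral items before, boosting floral dresses for her.
```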


In yet another scenario, when a shopper first lands on the website’s product catalogue page without using the search query box, fashion retailers can also tailor the default ranked list of recommended products. To achieve this, the ranking unit is modified by removing the Query component so that the fundamental unit of ranking becomes the Product-IndividualContext pair.

Conclusion

We are hard at work on making the user’s experience even better. One of the future directions includes customizing ranking to fit consumer behaviors that are unique to the fashion industry. For example, fashion retailers may want to combine the importance of past clicks to given products with seasonal fashion trends, and individual shopper’s evolving fashion tastes to even better predict future behavior.  

The greater vision is to help fashion retailers large or small to showcase fashion in a way that is cost effective and beautifully relevant for each shopper. For more details on this ranking technique’s context, you can refer to Microsoft’s patent titled “Intelligent tabular big data presentation in search environment”.

Chun Ming Chin
On behalf of the Bing Ads, Search and Artificial Intelligence (AI) teams
 


In case you missed it: December 2018 roundup


In case you missed them, here are some articles from December of particular interest to R users.

R 3.5.2 is now available.

Roundup of AI, Machine Learning and Data Science news from December 2018.

AzureStor, a new R package to interface with Azure Storage.

How to use the "plumber" package to create an R service as a container with the AzureContainers package.

How to give money to the R Foundation in support of the R project.

The Revolutions blog is 10 years old, and I reflect on some milestones in a talk given at SatRDays DC.

Reshama Shaikh compares gender diversity within the R and Python communities.

AzureVM, a new package for managing virtual machines in Azure from R.

And some general interest stories (not necessarily related to R):

As always, thanks for the comments and please send any suggestions to me at davidsmi@microsoft.com. Don't forget you can follow the blog using an RSS reader, via email using blogtrottr, or by following me on Twitter (I'm @revodavid). You can find roundups of previous months here.

Because it’s Friday: Synthetic faces, styled to your specifications


If you need someone's face to use in an application or some marketing materials, you might search one of the stock photography vendors for people of a given gender, skin tone, hairstyle, etc. Or, you could just ask the Generative Adversarial Network described in this paper by NVIDIA researchers to generate a brand new face for you. See it in action in the video below:

That's all for this week. Have a great weekend, and we'll be back with more from the blog next week.

Azure.Source – Volume 64


Updates

Azure Migrate is now available in Azure Government

The Azure Migrate service assesses on-premises workloads for migration to Azure. The service assesses the migration suitability of on-premises machines, performs performance-based sizing, and provides cost estimations for running on-premises machines in Azure. If you're contemplating lift-and-shift migrations, or are in the early assessment stages of migration, this service is for you. Azure Migrate now supports Azure Government as a migration project location. This means that you can store your discovered metadata in an Azure Government region (US Gov Virginia). In addition to Azure Government, Azure Migrate supports storing the metadata in United States and Europe geographies. Support for other Azure geographies is planned for the future.

Python 2.7 Now Available for App Service on Linux

Last month, built-in Python images for Azure App Service on Linux became available in public preview for Python 3.7 and 3.6. Python 2.7 is now available in the public preview of Python on Azure App Service (Linux). When you use the official images for Python on App Service on Linux, the platform automatically installs the dependencies specified in the requirements.txt​ file.
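For instance, a requirements.txt at the root of a hypothetical Django project might look like the following (the package pins are illustrative, not prescribed by App Service):

```
Django==2.1.4
gunicorn==19.9.0
requests==2.21.0
```

On deployment, the platform reads this file and installs the listed packages, so you don't need a custom container image just to carry dependencies.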

If you’re interested in building with Python on Azure, be sure to check out the four-part Python on Azure series with Nina Zakharenko and Carlton Gibson to get an introduction to building and running Django apps with Visual Studio Code and Azure Web Apps, setting up CI/CD pipelines with Azure Pipelines, and running serverless Django apps with Azure Functions.

News

Microsoft Certified Azure Developer Associate

Microsoft Azure Developers design, build, test, and maintain cloud solutions, such as applications and services, partnering with cloud solution architects, cloud DBAs, cloud administrators, and clients to implement these solutions. Based on feedback received about the Azure Developer Associate certification beta exams, AZ-200: Microsoft Azure Developer Core Solutions and AZ-201: Microsoft Azure Developer Advanced Solutions, the decision was taken to simplify the path and transition to a single exam, AZ-203: Developing Solutions for Microsoft Azure. By the way, Exam AZ-900: Microsoft Azure Fundamentals is an optional first step in learning about cloud services and how those concepts are exemplified by Microsoft Azure. You can take AZ-900 as a precursor to AZ-203, but it is not a prerequisite for it.


Technical content

Introduction to Cloud Storage for Developers

This introductory-level post covers data storage options in a platform-agnostic way, with a focus on Azure Storage examples, to help developers understand that traditional NoSQL and SQL databases aren't the only option. Jeremy Likness shares when and why cloud storage is a better option, definitions for various storage terms and concepts, simple ways to get started, and resources to learn more.

KubeCon 2018: Tutorial - Deploying Windows Apps with Kubernetes, Draft, and Helm

Curious about deploying Windows apps to Kubernetes? Would you like to use Draft and Helm, just as you would if you were deploying Linux apps or containers? Check out this blog post from Jessica Deen, which includes her session from KubeCon 2018.

Apache Spark: Tips and Tricks for Better Performance

Building on her "Apache Spark Deep Dive" exploration, Adi Polak shares her top five tips for improving Spark performance and writing better queries -- from why you should avoid custom user defined functions to understanding and optimizing your cloud configuration. In her next post, she’ll dive into how to use Apache Spark on Azure, including real life use cases.

Using Object Detection for Complex Image Classification Scenarios Part 1: The AI Computer Vision Revolution

AI and ML are theoretically as easy as consuming a few APIs, but how do you apply them to real business scenarios? In this series, you’ll walk through how a major Central Eastern European candy company uses computer vision, AI, and ML to solve a problem: automatically validating that store shelves are properly stocked, eliminating costly audits and manual processes. By the end of the series, you’ll understand how to compare and contrast different Machine Learning approaches and technologies, understand available services and tools, and build, train, and deploy your own custom models to the cloud and remote clusters.

Azure shows

Episode 260 - Azure Sphere | The Azure Podcast

In addition to the usual updates, Cale, Russell and Sujit break down the Azure Sphere offering from Microsoft and what it means for the future of IoT development.

Interning in Azure Engineering and the Visual Studio Code extension for ACR Build | Azure Friday

What is it like to intern at Microsoft? Scott Hanselman meets with three interns from the Microsoft Explorer Program (a cross-discipline internship designed for college freshmen and sophomores) to talk about their experience working on the Azure Container Registry and their contribution of ACR Build and Task capabilities to the Visual Studio Code Docker Extension.

Visual Azure Provisioning From a Whiteboard | The Xamarin Show

On this week's Xamarin Show, James is joined by good friend Christos Matskas who shows off a beautiful Xamarin application that is infused with AI to generate a full Azure backend just by drawing pictures on a white board. You don't want to miss this mind blowing demo and walkthrough of the code.

How the Azure DevOps teams plan with Aaron Bjork | DevOps Interviews

In this interview, Donovan Brown interviews Group Program Manager Aaron Bjork about Agile Planning.

IPFS in Azure | Block Talk

This episode introduces the use of IPFS (InterPlanetary File System) in a consortium setting. It shows how this technology can help decentralize the storage of data that is not part of the blocks in the blockchain, along with a short demonstration of how the marketplace offering for IPFS in Azure makes creating these storage networks simple.

Live demo of BeSense, an application built by Winvision on Azure Digital Twins | Internet of Things Show

Winvision has leveraged the spatial intelligence capabilities of Azure Digital Twins to build BeSense, a smart building application that provides real-time data that optimizes space utilization and occupant experience. Remco Ploeg, a Solution Architect at Winvision demos the application.

How to add logic to your Testing in Production sites with PowerShell | Azure Tips and Tricks

Learn how to add additional logic by using PowerShell to automatically distribute the load between your production and deployment slot sites with the Testing in Production feature.


Gopinath Chigakkagari on Key Optimizations for Azure Pipelines | The Azure DevOps Podcast

In this episode, Jeffrey Palermo is joined by his guest, Gopinath Chigakkagari. Gopinath hits on some fascinating points and topics about Azure Pipelines, including (but not limited to): what listeners should be looking forward to, some highlights of the new optimizations on the platform, key Azure-specific offerings, as well as his recommendations on what listeners should follow up on for more information!

Events

Microsoft Ignite | The Tour

Learn new ways to code, optimize your cloud infrastructure, and modernize your organization with deep technical training. Join us at the place where developers and tech professionals continue learning alongside experts. Explore the latest developer tools and cloud technologies and learn how to put your skills to work in new areas. Connect with our community to gain practical insights and best practices on the future of cloud development, data, IT, and business intelligence. Find a city near you and register today.


SolarWinds Lab #72: Two Geeks and a Goddess II: Azure the Easy Way

Wednesday, January 16 – 1:00-2:00 PM Central (UTC/GMT -6)

If there’s one takeaway from 2018, it’s that most organizations now run at least some production workloads in somebody else’s data center, especially Azure -- and we're here to show you how to monitor those cloud resources with the tools you already have. Join us - Phoummala Schmitt (Microsoft Cloud Advocate), Thomas LaRock (Head Geek and 10-year Microsoft MVP), and Patrick Hubbard (Head Geek) - for a special hybrid IT/cloud operations episode. We'll have live chat and experts on hand, so come with your Azure operations questions. You'll learn how to break down remote monitoring barriers, get a telemetry plan in place before migrating your apps, manage cloud costs, and throttle dev sprawl. We'll also cover the new Azure and Office 365 Server & Application Monitor (SAM) templates and account activation.

Time Series Forecasting: Build and Deploy your Machine Learning Models to Forecast the Future

Wednesday, January 23 – 8:00-11:00 AM Pacific (UTC/GMT -8)

In this O'Reilly three-hour live training course, Francesca Lazzeri walks you through the core steps for building, training, and deploying your time series forecasting models. First, you’ll learn common time series forecast methods, like simple exponential smoothing and recurrent neural networks (RNN), then get hands-on experience, using machine learning components like Keras, TensorFlow, and other open source Python packages to apply models to a real-world scenario.


A Cloud Guru | Azure This Week - 4 January 2019

This time on Azure This Week, Lars talks about 2019 predictions for Azure, changes and new certificates for Azure, and a new version of the Bot Framework SDK.


To infinity and beyond: The definitive guide to scaling 10k VMs on Azure


Every platform has limits: workstations and physical servers have resource boundaries, APIs may be rate-limited, and even the perceived endlessness of the virtual public cloud enforces limitations that protect the platform from overuse or misuse. You can learn more about these limitations by visiting our documentation, “Azure subscription and service limits, quotas, and constraints.” When working on scenarios that push platforms to their extreme, those limits become real, and thought should be put into overcoming them.

The following post includes essential notes from my work with Mike Kiernan, Mayur Dhondekar, and Idan Shahar. It covers several iterations in which we tried to reach a scale of 10K virtual machines running on Microsoft Azure, and explores the pros and cons of the different implementations.

Load tests at cloud scale

Load and stress tests before moving a new version to production are critical on the one hand, but pose a real challenge for IT on the other, because they require a considerable amount of resources to be available for only a short time each release cycle. Infrastructure purchased outright doesn’t justify its cost over the extended idle periods, making this a perfect use case for a public cloud platform where you pay only for usage.

This post is based on a customer we’ve been working with and discusses the challenges we met along the way. However, the solution provided is general enough to be used in other cases where large clusters of VMs exist in Azure, such as:

  • Scaling requirements beyond a single VMSS, where the cluster is static in size once provisioned (e.g., HPC clusters).
  • DDoS simulation – please note that in this case ethics must be practiced and the targeted endpoint must be owned by you; otherwise you risk liability for damages.
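To make the first bullet concrete: a single VM scale set is capped (1,000 instances with platform images, per the Azure limits documentation cited above), so a 10K-VM cluster means spreading instances across multiple scale sets. A minimal planning sketch, assuming that cap:

```python
def plan_scale_sets(total_vms, max_per_vmss=1000):
    """Split a target VM count into per-scale-set instance counts,
    respecting the per-VMSS instance cap."""
    full, rem = divmod(total_vms, max_per_vmss)
    return [max_per_vmss] * full + ([rem] if rem else [])

print(plan_scale_sets(10_000))  # ten scale sets of 1,000 VMs each
```

Whatever provisioning tool you use (ARM templates, CLI, SDK) would then create one scale set per entry in this plan, in parallel where possible.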

The process

At a high level, to provision and initialize a cluster of x VMs that “do something”, the following steps should be taken:

  • Start from a base image.
  • Provision x VMs from the base image.
  • Download and install the required software and data on each VM.
  • Start the “do-something” process on each VM.

However, given the targeted hyper-scale, there are a number of critical elements that must be taken into account. It quickly becomes clear that implementing such scenarios is as much about management, cost optimization, and avoiding platform limits as it is about infrastructure and the provisioning process.

  • How do you manage 10K VMs? How do you even count them?
  • What is the origin of the data, and can it handle the load of 10K concurrent downloads?
  • How would you know that the process has completed?
  • Can the cloud provide 10K VMs in a single region, and in which one?
  • How long would it take to provision the cluster and reach full scale?

The next section describes a load-test scenario implemented using different services, tackling the questions raised previously with the following goals:

  • Generate stress on a backend service located in some other datacenter using client machines (VMs) in Azure.
  • Trigger the process using an HTTP POST.
  • Avoid manual steps, prerequisites, and custom images, which may become outdated over time.
  • Minimize the time to reach a full-scale cluster.

The solution outline

Flow chart of the load-test scenario implemented using various services

Read more about all the details of the solution in the blog post, “To Infinity and Beyond (or: The Definitive Guide to Scaling 10k VMs on Azure).” You can also see the solution code and deployment scripts on GitHub.

Teradata to Azure SQL Data Warehouse migration guide

With the increasing benefits of cloud-based data warehouses, there has been a surge in the number of customers migrating from their traditional on-premises data warehouses to the cloud. Microsoft Azure SQL Data Warehouse (SQL DW) offers the best price-to-performance when compared to its cloud-based data warehouse competitors. Teradata is a relational database management system and is one of the legacy on-premises systems that customers are looking to migrate from.

A Teradata to SQL DW migration involves multiple steps. These steps include analyzing the existing workload, generating the relevant schema models, and performing the ETL operation. The whitepaper discussed here provides guidance for these migrations, with emphasis on the migration workflow, the architecture, technical design considerations, and best practices.

Migration Phases

Migration phases flow chart

The Teradata migration should pivot on the following six areas. The proof of concept, though recommended, is an optional step. With the benefit of Azure, you can quickly provision Azure SQL Data Warehouses for your development team to start business object migration before the data is migrated, speeding up the migration process.

Phase one – Fact finding

Through a question-and-answer session you define what your inputs and outputs are for the migration project.

Phase two – Defining success criteria for proof of concept (POC)

Taking the answers from phase one, you identify a workload for running a POC to validate the required outputs, and run the following phases as a POC.

Phase three – Data layer mapping options

This phase is about mapping the data you have in Teradata to the data layout you will create in Azure SQL Data Warehouse. Some of the common scenarios are data type mapping, date and time formats, and more.

Phase four – Data modeling

Once you’ve defined the data mappings, phase four concentrates on how to tune Azure SQL Data Warehouse for the best performance on the data you will be landing into it.

Phase five – Identify migration paths

What is the path of least resistance? What is the quickest path given your cloud maturity? Phase five describes the options open to you so you can decide on the path you wish to take.

Phase six – Execution of migration

Migrating your Teradata data to SQL Data Warehouse involves a series of steps. These steps are executed in three logical stages: preparation, metadata migration, and data migration.

Migration solution

To ingest data, you need a basic cloud data warehouse setup for moving data from your on-premises solution to Azure SQL Data Warehouse, and to enable the development team to build Azure Analysis Services cubes once the majority of the data is loaded.

Migration solution flow chart

• An Azure Data Factory pipeline is used to ingest and move data through the store, prep, and train stages.
• Extract and load files via PolyBase into the staging schema on Azure SQL DW.
• Transform data through the staging, source (ODS), EDW, and semantic schemas on Azure SQL DW.
• Azure Analysis Services is used as the semantic layer to serve thousands of end users and scale out Azure SQL DW concurrency.
• Build operational reports and analytical dashboards on top of Azure Analysis Services to serve thousands of end users via Power BI.

For more insight into how to approach a Teradata to Azure SQL Data Warehouse migration, see the whitepaper “Migrating from Teradata to Azure SQL Data Warehouse.”

This whitepaper is broken into sections which detail the migration phases, the preparation required for data migration including schema migration, the migration of the business logic, the actual data migration approach, and the testing strategy.

The scripts useful for the migration are available on GitHub under Teradata to Azure SQL DW Scripts.

Cognitive Services Speech SDK 1.2 – December update – Python, Node.js/NPM and other improvements

Developers can now access the latest improvements to the Cognitive Services Speech Service, including a new Python API and more. Details below.

Read the updated Speech Services documentation to get started today.

Speech services updates

What’s new

Python API for Speech Service

• Python 3.5 and later versions on the Windows and Linux operating systems are supported.
• Python is the first language that the Speech Service supports on macOS (version 10.12 and later).
• Python modules can be conveniently installed from PyPI.

Node.js support

• Support for Node.js is now available, in addition to support for JavaScript in the browser. Through the npm package manager, developers can install the Speech Service module and its prerequisites.
• The JavaScript version of the Speech Service is now also available as an open-source project on GitHub.

Linux support

Support for Ubuntu 18.04 is now available, in addition to pre-existing support for Ubuntu 16.04.

New features by popular demand

Lightweight SDK for greater performance

By reducing the number of required concurrent threads, mutexes, and locks, Speech Services now offers a more lightweight SDK with enhanced error reporting.

Control of server connectivity and connection status

A newly added connection object enables control over when the SDK connects to the Speech Service. You can also now subscribe to receive connection notifications that report the exact time of server connection and termination.

Unlimited audio session length support

For JavaScript, length restrictions on recorded audio sessions have been lifted. The SDK buffers the audio file and then automatically reconnects and retransmits audio data to the service.

Support for ProGuard during Android APK generation is also now available.

For more details and examples of how your business can benefit from the new functionality for Speech Services, check out the release notes and samples in the GitHub sample repository for Speech Services.
      Gain insight into your Azure Cosmos DB data with QlikView and Qlik Sense

      $
      0
      0

      Connecting data from various sources in a unified view can produce valuable insights that are otherwise invisible to the human eye and brain. As Azure Cosmos DB allows for collecting the data from various sources in various formats, the ability to mix and match this data becomes even more important for empowering your businesses with additional knowledge and intelligence.

      This is what Qlik’s analytics and visualization products, QlikView and Qlik Sense, have been able to do for years and now they support Azure Cosmos DB as a first-class data source. The following table summarizes the variety of connectivity options you have for to getting Azure Cosmos DB data in QlikView and Qlik Sense.

Azure Cosmos DB API | Connectivity Method | Qlik detailed instruction | Qlik live demo
Core (SQL) API | REST | Connecting to Azure Cosmos DB SQL API from Qlik Sense using the built-in REST Connector | -
Core (SQL) API | ODBC driver | Connecting to Azure Cosmos DB SQL API from Qlik Sense using the Azure Cosmos DB ODBC Connector | Azure Cosmos DB ODBC – Video Game Sales
MongoDB API | MongoDB Wire Protocol | Connecting to Azure Cosmos DB Mongo API from Qlik Sense using the Qlik MongoDB Connector | Azure Cosmos DB via Mongo DB API using Qlik Connector
MongoDB API | Qlik gRPC connector | Same as MongoDB Wire Protocol | -

      Qlik Sense and QlikView are data visualization tools that combine the data from different sources into a single view. Qlik Sense indexes every possible relationship between entities in the data so that you can gain immediate insights into it without making the connections manually. You can visualize Azure Cosmos DB data by using Qlik Sense.

Here is a step-by-step guide on how to set up an Azure Cosmos DB account and configure the ODBC connection to it in Qlik Sense.

      1. Create a Core (SQL) API account in Azure Cosmos DB.

      Create Azure Cosmos DB account screenshot

2. Create a database with a collection in it. Keep in mind that Azure Cosmos DB allows you to provision throughput for your databases and collections, as described in the article "Request units in Azure Cosmos DB."

3. Import the data. There are many ways to load data into an Azure Cosmos DB collection; the simplest is to use the Azure Cosmos DB Data Migration Tool. You can find the connection string on the Keys page in the portal.

      Qlik Cosmos SQL Keys screenshot

4. Next, in Qlik Sense, you need to install an ODBC driver for Azure Cosmos DB and configure it following the instructions provided in our documentation, "Connect to Azure Cosmos DB using BI analytics tools with the ODBC driver."

5. Open your app in Qlik Sense and click Add data from files and other sources. Select ODBC and choose the ODBC connection you created in the previous step.

      Create new connection (ODBC) screenshot

      6. Next, choose the database and the collection with the imported data.

      Add data to Cosmos DB SQL API - ODBC screenshot

      7. Add data to your app and configure your data insight visualizations. The following picture shows an example of the resulting view.

      Video Game Sales Dashboard example screenshot
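To give a feel for what the ODBC configuration in step 4 carries, here is a hedged Python sketch that only assembles a connection string; the driver name and the HOST/AuthenticationKey keywords are assumptions rather than confirmed attribute names, so check the driver documentation before relying on them:

```python
# Hypothetical sketch: assembling an ODBC connection string for the
# Cosmos DB ODBC driver. Driver and keyword names below are assumptions.

def build_cosmos_odbc_conn_str(host: str, auth_key: str) -> str:
    """Build a key=value ODBC connection string."""
    parts = {
        "DRIVER": "{Microsoft Azure DocumentDB ODBC Driver}",  # assumed driver name
        "HOST": host,                   # e.g. https://<account>.documents.azure.com:443/
        "AuthenticationKey": auth_key,  # primary key from the portal's Keys page
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

conn_str = build_cosmos_odbc_conn_str(
    "https://myaccount.documents.azure.com:443/", "<primary key>")
# With the driver installed, pyodbc.connect(conn_str) would open the session.
print(conn_str)
```

In practice the driver's DSN configuration dialog fills in these attributes for you; the sketch only shows what the resulting connection string looks like.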

      To learn more about Qlik tools and how to use them with Azure Cosmos DB please see the following resources.

To connect Qlik Sense to Azure Cosmos DB, use our documentation, "Connect Qlik Sense to Azure Cosmos DB and visualize your data," to help guide you.

      Please note, the above instructions and screenshots apply to Qlik Sense, but QlikView can also be connected to Azure Cosmos DB in a similar way. For more information visit the product pages for Qlik Sense, QlikView and Qlik Desktop.


      Relationship Hacks: Playing video games and having hobbies while avoiding resentment


Super Nintendo Controller from Pexels

I'm going to try to finish my Relationship Hacks book in 2019. I've been sitting on it too long. I'm going to try to use blog posts to spur myself into action.

      A number of people asked me what projects, what code, what open source I did over the long holiday. ZERO. I did squat. I played video games, in fact. A bunch of them. I felt a little guilty then I got over it.

      The Fun of Finishing - Exploring old games with Xbox Backwards Compatibility

      I'm not a big gamer but I like a good story. I do single player with a plot. I consider a well-written video game to be up there with a good book or a great movie. I like a narrative and a beginning and end. Since it was the holidays, it did require some thought to play games.

      When you're in a mixed relationship (a geek/techie and a non-techie) you need to be respectful of your partner's expectations. The idea of burning 4-6 hours playing games likely doesn't match up with your partner's idea of a good time. That's where communication comes in. We've found this simple system useful. It's non-gendered and should work for all types of relationships.

      My spouse and I sat down at the beginning of our holiday vacation and asked each other "What do you hope to get out of this time?" Setting expectations up front avoids quiet resentment building later. She had a list of to-dos and projects, I wanted to veg.

Sitting around all day (staycation) is valid, as is using the time to take care of business (TCB). We set expectations up front to avoid conflict. We effectively scheduled my veg time so it was planned and accepted and it was *ok and guilt-free*.

      We've all seen the trope of the gamer hyper-focused on their video game while the resentful partner looks on. My spouse and I want to avoid that - so we do. If she knows I want to immerse myself in a game, a simple heads up goes a LONG way. We sit together, she reads, I play.

It's important to not sneak these times up on your partner. "I was planning on playing all night" can butt up against "I was hoping we'd spend time together." Boom, conflict and quiet resentment can start. Instead, a modicum of planning. A simple heads-up and balance helps.

      I ended up playing about 2-3 days a week, from around 8-9pm to 2am (so a REAL significant amount of time) while we hung out on the other 4-5 days. My time was after the kids were down. My wife was happy to see me get to play (and finish!) games I'd had for years.

Also, there's the recognition from my spouse that while she doesn't personally value my gaming time, she values that *I* value it. Avoid belittling or diminishing your partner's hobby. If you do, you'll find yourself pushing (or being pushed) away.

      One day perhaps I'll get her hooked on a great game and one day I'll enjoy a Hallmark movie. Or not. ;) But for now, we enjoy knowing and respecting that we each enjoy (and sometimes share) our hobbies. End of thread.

If you enjoy my wife's thinking, check her out on my podcast, The Return of Mo. My wife and I also did a full podcast with audio over our Cancer Year.

      Hope you find this helpful.


      Sponsor: Preview the latest JetBrains Rider with its Assembly Explorer, Git Submodules, SQL language injections, integrated performance profiler and more advanced Unity support.



      © 2018 Scott Hanselman. All rights reserved.
           

      Azure DevOps Agents on Azure Container Instances (ACI)


This article provides a solution for running Azure DevOps agents (Build/Release agents) on Windows Server Core based containers, hosted on Azure Container Instances (ACI). A solution like this might be useful when the default Microsoft-hosted agents don’t fit your requirements and you’d rather not use “traditional” IaaS VMs for running your self-hosted agents.

      One of our customers decided to use Azure DevOps as their primary automation tool for Azure workloads. To achieve this, the following components are needed:

      • Private Git repositories hosted on Azure DevOps to store the configuration scripts (e.g. Terraform and ARM templates, and PowerShell scripts).
      • Build/Release pipelines to define which activities (e.g. applying Terraform templates or running PowerShell scripts) should be run in which order, with what parameters, in which Azure subscription, etc.
• A place to run these pipelines from. There are two options for this: using the Microsoft-hosted agents or implementing a set of self-hosted ones. Reminder: agents provide runtimes for pipelines on VMs.

      As this customer wanted to have full control over the agent machines, had concerns about using a shared, multi-tenant platform for this purpose, and was afraid of hitting any of the technical limitations of the platform-provided agents, the default choice of using Microsoft-hosted agents fell quickly into the no-go category.

      To fulfill these requirements, namely to enable the customer to run Build/Release pipelines in a scalable, customized and highly secured environment, we clearly needed to implement a set of self-hosted agents.

      Requirements summary:

• Scalability: a solution which can both scale up (i.e. the number of CPU cores and the amount of RAM can be increased) and scale out (multiple instances can be added to support parallel deployments) was desired.
      • Customization: this includes adding extra components, such as PowerShell modules, json2hcl or the Terraform executable, and not having unnecessary and unused components installed.
      • Security: interactive logons have to be prohibited, incoming requests on all ports of these machines must be denied.

As the customer prefers using PaaS services over IaaS VMs and has strict security requirements, the well-documented approach of deploying agents on VMs (what you would normally do) was out of the question.

Luckily, Azure offers a relatively new service called Azure Container Instances (ACI), which lets customers run containers without managing the underlying infrastructure. As this service is now generally available (GA), it was a perfect choice for this customer.

This solution is simple yet secure, and has multiple benefits over using IaaS VMs:

• It does not use public IPs. It doesn’t need one, as the Azure DevOps agent initiates the communication to the service.
      • It does not have any exposed ports. There’s no need for publishing anything.
      • Logging in to these containers is not possible (console and network access is not available) – only the configuration script’s outputs can be read, as these are printed to the console.
      • There’s no need to maintain a VNET or any other infrastructure pieces, as these ACI instances can exist on their own.
      • Has a lightweight footprint.
      • Can be provisioned very quickly: to fully configure a container instance with the required components takes 5-10 minutes.
      • Is immutable: it does not need patching/management. For version upgrades, the existing instances have to be deleted, and the new ones can easily be re-created by running the provision scripts again.

      How does the solution work?

      The solution consists of 3 scripts – all have to be placed to the same folder:

      1. Initialize-VstsAgentOnWindowsServerCoreContainer.ps1 – the external, “wrapper” script.
      2. Install-VstsAgentOnWindowsServerCoreContainer.ps1 – the container configuration script (internal script, runs inside of the containers) – this should never be invoked directly.
      3. Remove-VstsAgentOnWindowsServerCoreContainer.ps1 – removal script that can be used to delete containers that are no longer required.

      1. Initialize-VstsAgentOnWindowsServerCoreContainer.ps1

The wrapper script can be invoked from any location (including Azure Cloud Shell) that has the required components installed (see the prerequisites below).

The wrapper script copies the internal container configuration script to a publicly available storage container of the requested Storage Account, creates a new Resource Group (if one doesn’t already exist with the provided name), removes any pre-existing ACI containers with the same names within the same Resource Group, then creates the new ACI container instance(s) with the provided names and invokes the container configuration script inside them.

      The container(s) are based on the latest version of the official Windows Server Core image (microsoft/windowsservercore LTSC) available on Docker Hub.

      2. Install-VstsAgentOnWindowsServerCoreContainer.ps1

      The internal, container configuration script downloads and installs the latest available version of the Azure DevOps agent, and registers the instance(s) to the selected Agent Pool. It also configures the instance(s) with the latest version of Terraform, json2hcl and the selected PowerShell modules (by default AzureRM, AzureAD, Pester).

After the successful configuration, it prints the available disk space and keeps periodically checking that the “vstsagent” service is in a running state. Failure of this service will cause the container instance to be re-initialized. If this happens and the PAT token is still valid, the container will auto-heal itself. If the PAT token has already been revoked or has expired by this time, the container re-creation will fail.
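The auto-heal loop described above can be sketched as follows (a simplified Python stand-in for the PowerShell logic, with the Windows service check injected as a plain function):

```python
import time

def watch_service(is_running, interval_s=30, max_checks=None):
    """Poll is_running() until it reports failure; return the number of checks.

    is_running stands in for querying the 'vstsagent' Windows service.
    """
    checks = 0
    while max_checks is None or checks < max_checks:
        checks += 1
        if not is_running():
            # In the real container this is where re-initialization kicks in;
            # it only succeeds if the PAT token is still valid.
            return checks
        time.sleep(interval_s)
    return checks

# Simulated service that fails on the third poll:
states = iter([True, True, False])
checks = watch_service(lambda: next(states), interval_s=0)
print(checks)  # 3
```

The key design point is that the container itself is disposable: the watchdog does not try to repair the service in place, it simply lets the instance be rebuilt from scratch.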

      3. Remove-VstsAgentOnWindowsServerCoreContainer.ps1

      This removal script removes the selected Azure Container Instance(s). It leaves the Resource Group (and other resources within it) intact.

      Prerequisites

      • Azure Subscription, with an existing Storage Account
      • Access to Azure Cloud Shell (as the solution has been developed for and tested in Cloud Shell)
      • You need to have admin rights:
        • To create a storage container within the already existing Storage Account – OR – a storage container with the public access type of “Blob” has to exist,
        • To create a new Resource Group – OR – an existing Resource Group for the Azure Container Instances,
        • To create resources in the selected Resource Group.
• An Azure DevOps account with the requested Agent Pool has to exist.
      • Permission in the Azure DevOps account to add agents to the chosen Agent Pool.
      • A PAT token.

      How to manage the solution’s lifecycle

      1. Initialize ACI containers in Azure Cloud Shell

      • Get access to the Azure DevOps account where you would like to create the new agents. You might need to have rights to create a new Agent Pool if the requested one doesn’t exist.
      • Get a PAT token for agent registration (Agent Pools: read, manage; Deployment group: read, manage). To see detailed description of this step, visit the Deploy an agent on Windows page of Azure DevOps documentation.
      • Get access to the Azure Subscription where you need to deploy the ACI containers. See more details in the Prerequisites section above.
      • Get access to Azure Cloud Shell. See Quickstart for PowerShell in Azure Cloud Shell for more details.
      • Copy the below .ps1 script files to your Cloud Shell area:
      • Run the below script to create containers in your selected region
        Initialize-VstsAgentOnWindowsServerCoreContainer.ps1 -SubscriptionName "<subscription name>" -ResourceGroupName "<resource group name>" -ContainerName "<container 1 name>", "<container 2 name>", "<container n name>" -Location "<azure region 1>" -StorageAccountName "<storage account name>" -VSTSAccountName "<azure devops account name>" -PATToken "<PAT token>" -PoolName "<agent pool name>"
• In case you would like to have containers in any additional regions, re-run the script with a different “Location” parameter:
        Initialize-VstsAgentOnWindowsServerCoreContainer.ps1 -SubscriptionName "<subscription name>" -ResourceGroupName "<resource group name>" -ContainerName "<container 1 name>", "<container 2 name>", "<container n name>" -Location "<azure region 2>" -StorageAccountName "<storage account name>" -VSTSAccountName "<azure devops account name>" -PATToken "<PAT token>" -PoolName "<agent pool name>"
      • If the container creation procedure fails, Azure automatically retries creating a new instance.
      • Read the logs of each container.
        • In case of success, you can decide if you want to delete your PAT token. See more in Revoke personal access tokens to remove access.
        • In case any unhandled errors occurred, you can re-run the script for the instance in question, using the “-ReplaceExistingContainer” switch (see the description below in the Update ACI containers section).
      • As long as your PAT token is valid, you can remove the agents’ registration on the Azure DevOps portal. This will trigger the container to stop the Azure DevOps (VSTS) service, restart, and reapply all the settings defined in the container configuration script (“Install-VstsAgentOnWindowsServerCoreContainer”).
• If the PAT token has already been removed, in order to update/re-register the containers, you’ll have to generate a new PAT token, remove the existing containers, and re-run the “Initialize-VstsAgentOnWindowsServerCoreContainer.ps1” script.

      2. Update ACI containers

• If you would like to update your existing ACI containers, you can re-run the same “Initialize-VstsAgentOnWindowsServerCoreContainer.ps1” script using the “-ReplaceExistingContainer” switch as follows:
        Initialize-VstsAgentOnWindowsServerCoreContainer.ps1 -SubscriptionName "<subscription name>" -ResourceGroupName "<resource group name>" -ContainerName "<container 1 name>", "<container 2 name>", "<container n name>" -Location "<azure region 2>" -StorageAccountName "<storage account name>" -VSTSAccountName "<azure devops account name>" -PATToken "<PAT token>" -PoolName "<agent pool name>" -ReplaceExistingContainer
      • This will wipe out the existing container(s), and re-register the new one(s) with the same name. Note that this will create a new agent registration in Azure DevOps, as the agent names are generated based on the following pattern: <ACI container name>-<Date-Time in the format of yyyyMMdd-mmhhss>
• Once the new container(s) have been provisioned, the old agents become orphaned. These have to be manually deprovisioned on the Azure DevOps portal.
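To illustrate the agent naming pattern mentioned above, here is a small Python sketch; note that the '%M%H%S' ordering takes "mmhhss" literally from the pattern as quoted (minutes, hours, seconds), and the real script may order the time components differently:

```python
import re
from datetime import datetime

def agent_name(container_name, now=None):
    """Generate an agent name following the quoted pattern."""
    now = now or datetime.utcnow()
    # "mmhhss" is taken literally (minutes, hours, seconds); this is an
    # assumption about the real script's format string.
    return f"{container_name}-{now.strftime('%Y%m%d-%M%H%S')}"

name = agent_name("agent01", datetime(2019, 1, 7, 14, 30, 5))
assert re.fullmatch(r"agent01-\d{8}-\d{6}", name)
print(name)  # agent01-20190107-301405
```

Because the timestamp is part of the name, every re-registration produces a distinct agent entry, which is why old registrations must be cleaned up manually.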

      3. Delete ACI containers

      • To remove the ACI containers that are no longer required, run the below script:
        Remove-VstsAgentOnWindowsServerCoreContainer.ps1 -SubscriptionName "<subscription name>" -ResourceGroupName "<resource group name>" -ContainerName "<container 1 name>", "<container 2 name>", "<container n name>"
      • Once the containers have been removed, the agents on the Azure DevOps portal become orphaned. These have to be manually deprovisioned (deleted) on the portal.

      You can find the GitHub repo here: https://github.com/matebarabas/azure/tree/master/scripts/AzureDevOpsAgentOnACI

      New Azure Migrate and Azure Site Recovery enhancements for cloud migration

      $
      0
      0

      We are continuously enhancing our offerings to help you in your digital transformation journey to the cloud. You can read more about these offerings in the blog, “Three reasons why Windows Server and SQL Server customers continue to choose Azure.” In this blog, we will go over some of the new features added to Microsoft Azure Migrate and Azure Site Recovery that will help you in your lift and shift migration journey to Azure.

      Azure Migrate

Azure Migrate allows you to discover your on-premises environment and plan your migration to Azure. Based on popular demand, we have now enabled Azure Migrate in two new geographies, Azure Government and Europe. Support for other Azure geographies will be enabled in the future.

      Below is the list of regions within the Azure geographies where the discovery and assessment metadata is stored.

Geography | Region for metadata storage
United States | West Central US, East US
Europe | North Europe, West Europe
Azure Government | U.S. Gov Virginia

When you create a migration project in the Azure portal, the region for metadata storage is randomly selected. For example, if you create a project in the United States, we will automatically select either West Central US or East US. If you need the metadata stored in a particular region within the geography, you can use our REST APIs to create the migration project and specify the region in the API request.
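As a rough illustration of pinning the region via the REST API, here is a minimal Python sketch that only assembles the request; the resource path and api-version shown are assumptions, so consult the Azure Migrate REST API reference for the actual values:

```python
# Hedged sketch: building a PUT request for a migration project with an
# explicit metadata region. Path segments and api-version are assumptions.

def migrate_project_request(subscription_id, resource_group, project, region):
    url = (
        "https://management.azure.com"
        f"/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
        f"/providers/Microsoft.Migrate/projects/{project}"
        "?api-version=2018-02-02"  # assumed API version
    )
    body = {"location": region}  # pins metadata storage to a specific region
    return url, body

url, body = migrate_project_request("sub-id", "rg", "myproject", "westcentralus")
print(url)
```

Sending this with an authenticated PUT (e.g. via the Azure SDK or `requests`) is then a one-liner, which is omitted here because it requires live credentials.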

Note that the geography selection does not restrict you from planning your migration to other Azure target regions. Azure Migrate allows you to specify more than 30 Azure target regions for migration planning. You can learn more by visiting our documentation, “Customize an assessment.”

      Azure Site Recovery

Azure Site Recovery (ASR) helps you migrate your on-premises virtual machines (VMs) to IaaS VMs in Azure; this is the lift and shift migration. We are listening to your feedback and have recently made enhancements to ASR to make your migration journey even smoother. Below is the list of recent enhancements:

      • Support for physical servers with UEFI boot type: VMs with UEFI boot type are not supported in Azure. However, ASR allows you to migrate such on-premises Windows servers to Azure by converting the boot type of the on-premises servers to BIOS while migrating them. Previously, ASR supported conversion of boot type for only virtual machines. With the latest update, ASR now also supports migration of physical servers with UEFI boot type. The support is restricted to Windows machines only (Windows Server 2012 R2 and above).
• Linux disk support: Previously, ASR had certain restrictions regarding directories on Linux machines: it required directories such as /(root), /boot, and /usr to be on the same OS disk of the VM in order to migrate it. Additionally, it did not support VMs that had /boot on an LVM volume rather than on a disk partition. With the latest update, ASR now supports directories on different disks and also supports /boot on an LVM volume. This essentially means that ASR allows migration of Linux VMs with LVM-managed OS and data disks, and directories spread across multiple disks. You can learn more by visiting our documentation, “Support matrix for disaster recovery of VMware VMs and physical servers to Azure.”
      • Migration from anywhere: ASR helps you migrate any kind of server to Azure no matter where it runs, private cloud or public cloud. We are happy to announce that the guest OS coverage for AWS has now expanded, and ASR now supports the following operating systems for migration of AWS VMs to Azure.
Source | OS versions
AWS | RHEL 6.5+ (new), RHEL 7.0+ (new), CentOS 6.5+ (new), CentOS 7.0+ (new), Windows Server 2016, Windows Server 2012 R2, Windows Server 2012, 64-bit Windows Server 2008 R2 SP1 or later
VMware and physical servers | Get more details on the supported OS versions by reading our documentation, “Support matrix for disaster recovery of VMware VMs and physical servers to Azure.”
Hyper-V | Guest OS agnostic

Learn more about how you can migrate from AWS to Azure in our documentation, “Migrate Amazon Web Services (AWS) VMs to Azure.”

We are listening and continuously enhancing these services. If you have any feedback or ideas, use our UserVoice forums for Azure Migrate and ASR to let us know.

      If you are new to these tools, get started at the Azure Migration Center. Make sure you also start your journey right by taking the free Assessing and Planning for Azure Migration course offered by Microsoft.

      Streamlined development experience with Azure Blockchain Workbench 1.6.0


      We’re happy to announce the release of Azure Blockchain Workbench 1.6.0. It includes new features such as application versioning, updated messaging, and streamlined smart contract development. You can deploy a new instance of Workbench through the Azure portal or upgrade existing deployments to 1.6.0 using our upgrade script.

      Please note the breaking changes section, as the removal of the WorkbenchBase base class and the changes to the outbound messaging format will require modifications to your existing applications.

      This update includes the following improvements:

      Application versioning

      One of the most popular feature requests from you all has been that you would like to have an easy way to manage and version your Workbench applications instead of having to manually change and update your applications as you are in the development process.

We’ve continued to improve the Workbench development story: 1.6.0 adds support for application versioning via the web app as well as the REST API. You can upload new versions directly from the web application by clicking “Add version.” Note that if an application role name has changed, its role assignments will not be carried over to the new version.

      Blockchain Workbench applications screenshot

      Blockchain Workbench adding new version screenshot

      Blockchain Workbench confirming new version upload screenshot

You can also view the application version history. To view and access older versions, select the application and click “version history” in the command bar. Note that, as of now, older versions are read-only by default. If you would like to interact with older versions, you can explicitly enable them.

      Blockchain Workbench version history screenshot

      Blockchain Workbench new version workflow screenshot

      New egress messaging API

Workbench provides many integration and extension points, including a REST API and a messaging API. The REST API provides developers a way to integrate with blockchain applications. The messaging API is designed for system-to-system integrations.

In our previous release, we enabled more scenarios with a new input messaging API. In 1.6.0, we have implemented an enhanced and updated output messaging API, which publishes blockchain events via Azure Event Grid and Azure Service Bus. This enables downstream consumers to take actions based on these events and messages, such as sending email notifications when there are updates on relevant contracts on the blockchain, or triggering events in existing enterprise resource planning (ERP) systems.

      Input and output messaging API flowchart

      Here is an example of a contract information message with the new output messaging API. You’ll get the information about the block, a list of modifying transactions for the contract, as well as information about the contract itself such as contract ID and contract properties. You also get information on whether or not the contract was newly created or if a contract update occurred.

      {
           "blockId": 123,
           "blockhash": "0x03a39411e25e25b47d0ec6433b73b488554a4a5f6b1a253e0ac8a200d13f70e3",
           "modifyingTransactions": [
               {
                   "transactionId": 234,
                   "transactionHash": "0x5c1fddea83bf19d719e52a935ec8620437a0a6bdaa00ecb7c3d852cf92e18bdd",
                   "from": "0xd85e7262dd96f3b8a48a8aaf3dcdda90f60dadb1",
                   "to": "0xf8559473b3c7197d59212b401f5a9f07b4299e29"
               },
               {
                   "transactionId": 235,
                   "transactionHash": "0xa4d9c95b581f299e41b8cc193dd742ef5a1d3a4ddf97bd11b80d123fec27506e",
                   "from": "0xd85e7262dd96f3b8a48a8aaf3dcdda90f60dadb1",
                   "to": "0xf8559473b3c7197d59212b401f5a9f07b4299e29"
               }
           ],
           "contractId": 111,
           "contractLedgerIdentifier": "0xf8559473b3c7197d59212b401f5a9f07b4299e29",
           "contractProperties": [
               {
                   "workflowPropertyId": 1,
                   "name": "State",
                   "value": "0"
               },
               {
                   "workflowPropertyId": 2,
                   "name": "Description",
                   "value": "1969 Dodge Charger"
               },
               {
                   "workflowPropertyId": 3,
                   "name": "AskingPrice",
                   "value": "30000"
               },
               {
                   "workflowPropertyId": 4,
                   "name": "OfferPrice",
                   "value": "0"
               },
               {
                   "workflowPropertyId": 5,
                   "name": "InstanceOwner",
                   "value": "0x9a8DDaCa9B7488683A4d62d0817E965E8f248398"
               }
           ],
           "isNewContract": false,
           "connectionId": 1,
           "messageSchemaVersion": "1.0.0",
           "messageName": "ContractMessage",
           "additionalInformation": {}
      }
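A downstream consumer might process this message as in the following Python sketch (a trimmed copy of the payload is inlined to keep the example self-contained):

```python
import json

# Trimmed version of the ContractMessage payload shown above.
contract_json = """
{
  "contractId": 111,
  "isNewContract": false,
  "contractProperties": [
    {"workflowPropertyId": 2, "name": "Description", "value": "1969 Dodge Charger"},
    {"workflowPropertyId": 3, "name": "AskingPrice", "value": "30000"}
  ],
  "messageName": "ContractMessage"
}
"""

message = json.loads(contract_json)
# Index the properties by name and report whether this was a create or update.
props = {p["name"]: p["value"] for p in message["contractProperties"]}
event = "created" if message["isNewContract"] else "updated"
summary = (f"Contract {message['contractId']} {event}: "
           f"{props['Description']} at {props['AskingPrice']}")
print(summary)  # Contract 111 updated: 1969 Dodge Charger at 30000
```

In a real integration, the same parsing would run inside a Service Bus or Event Grid handler rather than on an inlined string.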

      Read more about the newly designed messaging API in our documentation “Azure Blockchain Workbench messaging integration.” Note that this redesign of the output messaging model will impact existing integrations you have done.

      WorkbenchBase class is no longer needed in contract code

      For customers who have been using Workbench, you will know that there is a specific class that you need to include in your contract code, called WorkbenchBase. This class enabled Workbench to create and update your specified contract. When developing custom Workbench applications, you would also have to call functions defined in the WorkbenchBase class to notify Workbench that a contract had been created or updated.

      With 1.6.0, this code serving the same purpose as WorkbenchBase will now be autogenerated for you when you upload your contract code. You will now have a more simplified experience when developing custom Workbench applications and will no longer have bugs or validation errors related to using WorkbenchBase. See our updated samples, which have WorkbenchBase removed.

      This means that you no longer need to include the WorkbenchBase class nor any of the contract update and contract created functions defined in the class. To update your older Workbench applications to support this new version, you will need to change a few items in your contract code files:

      • Remove the WorkbenchBase class.
      • Remove calls to functions defined in the WorkbenchBase class (ContractCreated and ContractUpdated).

      If you upload an application with WorkbenchBase included, you will get a validation error and will not be able to successfully upload until it is removed. For customers upgrading to 1.6.0 from an earlier version, your existing Workbench applications will be upgraded automatically for you. Once you start uploading new versions, they will need to be in the 1.6.0 format.

      Get available updates directly from within Workbench

      Whenever a Workbench update is released, we announce the updates via the Azure blog and post release notes in our GitHub. If you’re not actively monitoring these announcements, it can be difficult to figure out whether or not you are on the latest version of Workbench. You might be running into issues while developing which have already been fixed by our team with the latest release.

      We have now added the capability to view information for the latest updates directly within the Workbench UI. If there is an update available, you will be able to view the changes available in the newest release and update directly from the UI.

      screenshot of the latest updates available within the Workbench UI

      Breaking changes in 1.6.0

      • WorkbenchBase related code generation: Before 1.6.0, the WorkbenchBase class was needed because it defined events indicating creation and update of Blockchain Workbench contracts. With this change, you no longer need to include it in your contract code file, as Workbench will automatically generate the code for you. Note that contracts containing WorkbenchBase in the Solidity code will be rejected when uploaded.
      • Updated outbound messaging API: Workbench has a messaging API for system to system integrations. We have had an outbound messaging API which has been redesigned. The new schema will impact the existing integration work you have done with the current messaging API. If you want to use the new messaging API you will need to update your integration specific code.
  • The names of the Service Bus queues and topics have changed in this release. Any code that points to the Service Bus will need to be updated to work with Workbench version 1.6.0.
    • ingressQueue - the input queue on which request messages arrive.
    • egressTopic - the output topic on which update and information messages are sent.
  • The messages delivered in version 1.6.0 are in a different format. Existing code that interrogates the messages from the messaging API and takes action based on their content will need to be updated. You can read more about the newly designed messaging API in our documentation “Azure Blockchain Workbench messaging integration.”
      • Workbench application sample updates: All Workbench application samples have been updated, since the WorkbenchBase class is no longer needed in contract code. If you use the updated samples on GitHub with an older version of Workbench, or older samples with 1.6.0, you will see errors. Upgrade to the latest version of Workbench if you want to use the samples.
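      For integrators updating to the new messaging API, the shape of the change can be sketched in a few lines. The snippet below is a minimal, illustrative sketch only: the payload field names (requestId, workflowName, parameters) are assumptions made for the example, not the official 1.6.0 schema, which is documented in “Azure Blockchain Workbench messaging integration.” In a real integration, the resulting JSON would be sent to the ingressQueue via Azure Service Bus.

```python
import json

# Illustrative only: these field names are assumptions, not the official
# Workbench 1.6.0 message schema (see "Azure Blockchain Workbench
# messaging integration" for the real contract).
def build_request_message(request_id, workflow_name, parameters):
    """Compose a JSON payload that a caller might place on the ingressQueue."""
    payload = {
        "requestId": request_id,        # hypothetical field
        "workflowName": workflow_name,  # hypothetical field
        "parameters": parameters,       # hypothetical field
    }
    return json.dumps(payload)

# Hypothetical request for an asset-transfer style workflow:
msg = build_request_message("42", "AssetTransfer", {"price": 100})
print(json.loads(msg)["workflowName"])
```

      Because the outbound schema also changed, any code reading from egressTopic would need a matching update to parse the new message format.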

      You can stay up to date on Azure Blockchain by following us on Twitter @MSFTBlockchain. Please use our Blockchain User Voice to provide feedback and suggest features/ideas for Workbench. Your input is helping make this a great service. We look forward to hearing from you.

      Multi-modal topic inferencing from videos


      Any organization that has a large media archive struggles with the same challenge – how can we transform our media archives into business value? Media content management is hard, and so is content discovery at scale. Content categorization by topics is an intuitive approach that makes it easier for people to search for the content they need. However, content categorization is usually deductive and doesn’t necessarily appear explicitly in the video. For example, content that is focused on the topic of ‘healthcare’ may not actually have the word ‘healthcare’ presented in it, which makes the categorization an even harder problem to solve. Many organizations turn to tagging their content manually, which is expensive, time-consuming, error-prone, requires periodic curation, and is not scalable.

      In order to make this process more consistent and effective, in both cost and time, we are introducing multi-modal topic inferencing in Video Indexer. This new capability indexes media content using a cross-channel model to automatically infer topics. The model does so by projecting the video concepts onto three different ontologies - IPTC, Wikipedia, and the Video Indexer hierarchical topic ontology (see more information below). The model uses transcription (spoken words), OCR content (visual text), and celebrities recognized in the video by the Video Indexer facial recognition model. Together, the three signals capture the video concepts from different angles, much like we do when we watch a video.

      Figure 1 - Topics on Video Indexer portal

      Topics vs. Keywords

      Video Indexer’s legacy Keyword Extraction model highlights the significant terms in the transcript and the OCR texts. Its added value comes from the unsupervised nature of the algorithm and its invariance to the spoken language and jargon. The main difference between the existing keyword extraction model and the topic inference model is that keywords are explicitly mentioned terms, whereas topics are inferred higher-level concepts, derived by using a knowledge graph to cluster similar detected concepts together.

      Example

      Let’s look at the opening keynote of the Microsoft Build 2018 developers’ conference which presented numerous products and features as well as the vision of Microsoft for the near future. The main theme of Microsoft leadership was how AI and ML are infused into the cloud and edge. The video is over three and a half hours long which would take a while to manually label. It was indexed by Video Indexer and yielded the following topics: Technology, Web Development, Word Embeddings, Serverless Computing, Startup Advice and Strategy, Machine Learning, Big Data, Cloud Computing, Visual Studio Code, Software, Companies, Smartphones, Windows 10, Inventions, and Media Technology.

      The experience

      Let’s continue with the Build keynote example. The topics are available both on the Video Indexer portal on the right, as shown in Figure 2, and through the API in the Insights JSON, as shown in Figure 3, where both IPTC topics like “Science and Technology” and Wikipedia category topics like “Software” appear side by side.

      Figure 2- Video Indexer insights with topics on the right

      Figure 3- Insights JSON example of all ontologies

      Under the hood

      The artificial intelligence models applied under the hood in Video Indexer are illustrated in Figure 4. The diagram represents the analysis of a media file from its upload, shown on the left-hand side, to the insights on the far right-hand side. The bottom channel applies multiple computer vision algorithms, such as OCR and face recognition. Above it, you’ll find the audio channel, starting with fundamental algorithms such as language identification and speech-to-text and moving to higher-level models like keyword extraction and topic inference, which are based on natural language processing algorithms. This is a powerful demonstration of how Video Indexer orchestrates multiple AI models in a building-block fashion to infer higher-level concepts using robust and independent input signals from different sources.

      Figure 4 - Video Indexer AI models under the hood

      Video Indexer applies two models to extract topics. The first is a deep neural network that scores and ranks the topics directly from the raw text, based on a large proprietary dataset. This model maps the transcript of the video to the Video Indexer Ontology and IPTC. The second model applies spectral graph algorithms to the named entities mentioned in the video. The algorithm combines input signals like the Wikipedia IDs of the celebrities recognized in the video, which are structured data, with signals like OCR and transcript that are unstructured by nature. To extract the entities mentioned in the text, we use the Entity Linking Intelligent Service (ELIS). ELIS recognizes named entities in free-form text, so that from this point on we can use structured data to get the topics. We then build a graph based on the similarity of the entities’ Wikipedia pages and cluster it to capture the different concepts within the video. The final phase ranks the Wikipedia categories according to their posterior probability of being a good topic, and two examples per cluster are selected. The flow is illustrated in Figure 5.

      Figure 5 – Multi-modal topics inference model flow
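      The entity-clustering step described above can be illustrated with a toy sketch. This is not the production model: the similarity measure (Jaccard overlap of category sets), the threshold, and the sample entities and categories below are all invented for the example; Video Indexer’s actual spectral graph algorithm is more sophisticated.

```python
# Toy sketch of the clustering idea: entities detected in a video are
# linked by the similarity of their Wikipedia pages, the graph is cut
# into clusters, and each cluster yields candidate concepts.
def jaccard(a, b):
    """Similarity between two sets of Wikipedia categories (illustrative)."""
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_entities(entity_categories, threshold=0.3):
    """Connected-components clustering over a thresholded similarity graph."""
    entities = list(entity_categories)
    parent = {e: e for e in entities}

    def find(e):  # union-find with path halving
        while parent[e] != e:
            parent[e] = parent[parent[e]]
            e = parent[e]
        return e

    for i, a in enumerate(entities):
        for b in entities[i + 1:]:
            if jaccard(entity_categories[a], entity_categories[b]) >= threshold:
                parent[find(a)] = find(b)

    clusters = {}
    for e in entities:
        clusters.setdefault(find(e), []).append(e)
    return list(clusters.values())

# Hypothetical entities with hypothetical category sets:
cats = {
    "Azure":      {"Cloud computing", "Microsoft"},
    "Kubernetes": {"Cloud computing", "Software"},
    "Mozart":     {"Composers", "Classical music"},
}
print(cluster_entities(cats))
```

      In this toy run, the two technology entities share a category and land in one cluster, while the unrelated entity forms its own; the real model would then rank each cluster's Wikipedia categories to surface topics.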

      Ontologies

      Wikipedia categories – Categories are tags that can be used as topics. They are well edited, and with 1.7 million categories, the value of this high-resolution ontology lies both in its specificity and in its graph-like connections, with links to articles as well as other categories.

      Video Indexer Ontology – The Video Indexer Ontology is a proprietary hierarchical ontology with over 20,000 entries and a maximum depth of three layers.

      IPTC – The IPTC ontology is popular among media companies. This hierarchically structured ontology can be explored on IPTC's NewsCodes site. Video Indexer maps most of its ontology topics to the first-level layer of IPTC.
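      The projection from a fine-grained Wikipedia category onto a coarse first-level IPTC topic can be pictured as a simple lookup. The mapping entries below are invented for illustration; Video Indexer's actual Wikipedia-to-IPTC projection is proprietary and far larger.

```python
# Illustrative only: this mapping is invented for the example; Video
# Indexer's real Wikipedia-to-IPTC projection is proprietary.
WIKIPEDIA_TO_IPTC = {
    "Cloud computing":  "Science and Technology",
    "Machine learning": "Science and Technology",
    "Classical music":  "Arts, Culture and Entertainment",
}

def project_to_iptc(wikipedia_categories):
    """Return the distinct first-level IPTC topics covering the input categories."""
    topics = {WIKIPEDIA_TO_IPTC[c] for c in wikipedia_categories
              if c in WIKIPEDIA_TO_IPTC}
    return sorted(topics)

print(project_to_iptc(["Cloud computing", "Machine learning", "Knitting"]))
```

      Note how several specific categories collapse into one first-level IPTC topic, and unmapped categories are simply dropped, which is the practical value of a coarse ontology for browsing.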

      The bottom line

      Video Indexer’s topic model empowers media users to categorize their content using an intuitive methodology and optimize their content discovery. Multi-modality is a key ingredient for recognizing high-level concepts in video. Using a supervised deep learning-based model along with an unsupervised Wikipedia knowledge graph, Video Indexer can understand the inner relations within media files, and therefore provide a solution that is accurate, efficient, and less expensive than manual categorization.

      If you want to convert your media content into business value, check out Video Indexer. If you’ve indexed videos in the past, we encourage you to re-index your files to experience this exciting new feature.

      Have questions or feedback? Using a different media ontology and want it in Video Indexer? We would love to hear from you!

      Visit our UserVoice to help us prioritize features, or email VISupport@Microsoft.com with any questions.

      AzureR packages now on CRAN


      The suite of AzureR packages for interfacing with Azure services from R is now available on CRAN. If you missed the earlier announcements, this means you can now use the install.packages function in R to install these packages, rather than having to install from the GitHub repositories. Updated versions of these packages will also be posted to CRAN, so you can get the latest versions simply by running update.packages.

      The packages are:

      • AzureVM (CRAN): a package for working with virtual machines and virtual machine clusters. It allows you to easily deploy, manage and delete a VM from R. A number of templates are provided based on the Data Science Virtual Machine, or you can also supply your own template. Introductory vignette.
      • AzureStor: a package for working with storage accounts. In addition to a Resource Manager interface for creating and deleting storage accounts, it also provides a client interface for working with the storage itself. You can upload and download files and blobs, list file shares and containers, list files within a share, and so on. It supports authenticated access to storage via either a key or a shared access signature (SAS). Introductory vignette.
      • AzureContainers: a package for working with containers in Azure: specifically, it provides an interface to Azure Container Instances, Azure Container Registry and Azure Kubernetes Service. You can easily create an image, push it to an ACR repo, and then deploy it as a service to AKS. It also provides lightweight shells to docker, kubectl and helm (assuming these are installed). Vignettes on deploying with the plumber package and with Microsoft ML Server.
      • AzureRMR: the base package of the AzureR family for interfacing with the Azure Resource Manager and which can be extended to interface with other services. There's an introductory vignette, and another on extending AzureRMR.

      Click on the first links in each bullet above for a blog post introducing each package. The introductory vignette listed with each package provides further details, and will be up-to-date for the latest version of the package.

      Feedback is welcome on all of these packages, which can be found as part of the cloudyr project on GitHub.

      LOB ISV Engagement for Dynamics 365 and the Power Platform


      Note: this post was originally posted here.

      Throughout my career I have had the opportunity to work with businesses, partners and public entities around the world to leverage Microsoft technologies within their organizations. I’ve also worked closely with our ecosystem partners, both hardware (OEM) and software (ISV), to team up with Microsoft in the creation of their offerings and the growth of their businesses. These engagements have spanned from two-person start-ups to Fortune 1000 corporations and everyone in between. Working with the ecosystem to drive mutual success and watching people and companies grow has been one of my favorite areas to focus on over the last 20+ years at Microsoft.

      In that spirit, I’m happy to add working with ISVs back into my remit as a complement to the work I’ve been doing around AI. Starting in November 2018 my role expanded to include engagement with the broad set of Line-of-Business (LOB) ISVs that build on the Microsoft platforms, with a focus on growing the Dynamics 365 ISV community. This continues to be an incredible time for ISVs, as both customers and partners continue their digital transformation and look to their ISVs to help them along the way.

      The latest step in that journey for Microsoft has been the transformation of the Dynamics 365 product line into a set of business SaaS offerings, along with the creation of a platform asset (Power platform) that all LOB solutions can leverage in their transformation. Given our experience in transforming our own platforms and solutions from on-premises offerings to cloud offerings, we look forward to continuing to help ISVs transform and grow based on the state of their solutions and customer base. In this initial post I would like to share my perspective on how the team and I are looking at this opportunity and our plan to help existing and new partners take advantage of what we’ve learned to date. Over time we’ll provide updates on the tools available to help ISVs from both a technical and go-to-market perspective.
Note: this new series of blog posts will complement the ongoing AI blog series, not replace it.

      Opportunity

      As we work with customers and partners, it is clear the opportunity for LOB ISVs continues to grow. Whether it’s through industry or horizontal solutions, migrating customers from existing on-premises solutions to the cloud, expanding a customer’s cloud portfolio, or helping customers leverage the new generation of building blocks, the opportunity for ISVs has never been greater. At the same time, there are many ISVs that are transforming their business by leveraging new technologies and looking to expand their footprint. In all these cases we want to add the Dynamics 365 and Power platforms to the suite of tools that Microsoft can offer to help ISVs in their efforts. Microsoft at its core is a platform company, and the transition of the Dynamics 365 family into a SaaS platform and a set of customizable SaaS services nicely complements Azure and Office to provide the only integrated set of platform offerings, from IaaS to PaaS to SaaS, that ISVs can leverage in their work.

      The focus of our efforts in this area will be to ensure the LOB ecosystem is able to leverage any aspect of the platform that might be useful, whether that is the Common Data Model and Common Data Service; the Power platform, including Flow, PowerApps and Power BI; or the Dynamics 365 applications (sales, customer service, finance and operations, etc.). There are also opportunities to leverage the new generation of building blocks, including AI and Mixed Reality.

      As an organization focusing on ISVs, we have three core focus areas: 1) find and partner with ISVs that will help our mutual customers be successful; 2) minimize the friction of onboarding to Dynamics 365, Power platform and AppSource; and 3) help ISVs that build on the platform grow their business. We believe there are clear benefits for ISVs who work with us in this area, including:

      • Expanding your Business by leveraging the Microsoft Partner ecosystem and our best-in-class sales and go-to-market support.
        1. Promote your app on AppSource to reach up to 100 million commercial active users.
        2. Join a global community that connects you to the relationships, insights, tools, resources, and programs to expand customer reach.
        3. Leverage Microsoft co-selling programs and resources to drive new revenue growth.
      • Building Modern Cloud LOB Solutions on Microsoft’s Azure, Office 365 and Dynamics 365 offerings, and leverage the Microsoft Power platform and new technology building blocks.
        1. Innovate with a common data platform that brings together your customers’ data with unique Microsoft data assets.
        2. Enable app development from citizen to professional developers through the Power Application tools.
        3. Leverage existing solution frameworks in Marketing, Sales, Service, Commerce, Operations, Finance, Talent (MSSCOFT).
        4. Powerful new building blocks – AI, Mixed Reality, IoT.
      • Getting to Market Faster on a trusted global platform with familiar tools, unique data assets and industry accelerators.
        1. Take advantage of your team’s existing knowledge of Azure and Office 365, as well as vertical accelerators.
        2. Protect and govern your sensitive data using Azure’s industry-leading security and the most comprehensive compliance offerings.
        3. Our global infrastructure spans more than 100 highly secure facilities worldwide while meeting local data residency needs.

      Using the Dynamics 365 and Power Platforms

      As we work with LOB ISVs that are currently leveraging the Dynamics 365 and Power platforms there are a set of common patterns we are seeing in terms of how they leverage these offerings:

      • Build: There are ISVs leveraging the Power platform (e.g. PowerApps, Power BI, etc.) in the creation of their SaaS offerings. In this pattern ISVs build unique horizontal and/or vertical solutions that have their own user experience and standalone functionality.
      • Embed: There is another set of ISVs we see building on the Dynamics 365 first-party apps (e.g. Dynamics 365 for Sales) to create industry-focused apps. They have a high-level dependency on Dynamics 365 and leverage the same UI model as the app they build on. Often, they also build on the Power platform to create additional functionality and capability.
      • Connect: The third pattern we see is when ISVs build stand-alone experiences that connect into Dynamics 365 or the Power platform. These connected offerings enhance the Dynamics 365 capabilities by providing additional capabilities that are not natively part of the platform. These offerings have limited dependency on the platform and have their own user experience.

      ISV Success

      At the end of the day, any platform is judged by the health of its ecosystem. With Dynamics 365 and the Power platform providing a rich set of capabilities for ISVs to leverage, we are focused on making these tools a successful part of our ecosystem offerings. The team and I will be the champion for LOB ISVs and developers in this space, representing their unique needs inside of Microsoft and in the broader community. I look at this extended remit as an opportunity to knit together the work of our ISV-focused teams, programs and tools, with the goal of creating a unified, frictionless experience for our partners. By working to ensure ISVs can quickly build quality applications with the right knowledge and guidance, and have access to GTM programs that enable them to grow their overall business, we want to be a great partner for the ISV community. I will leverage my experience from OEM and DX while focusing on unique opportunities for Dynamics 365.

      This week I will also be attending CES 2019 through Wednesday in Las Vegas, talking with companies about AI but also available to any LOB ISVs that would like to meet. I will also be at Mobile World Congress 2019 in February.

      Cheers,

      Guggs

      Windows 10 SDK Preview Build 18309 available now!


      Today, we released a new Windows 10 Preview Build of the SDK to be used in conjunction with Windows 10 Insider Preview (Build 18309 or greater). The Preview SDK Build 18309 contains bug fixes and under-development changes to the API surface area.

      The Preview SDK can be downloaded from the developer section on Windows Insider.

      For feedback and updates to the known issues, please see the developer forum. For new developer feature requests, head over to our Windows Platform UserVoice.

      Things to note:

      • This build works in conjunction with previously released SDKs and Visual Studio 2017. You can install this SDK and still continue to submit your apps that target Windows 10 build 1809 or earlier to the Microsoft Store.
      • The Windows SDK will now formally only be supported by Visual Studio 2017 and greater. You can download Visual Studio 2017 here.
      • This build of the Windows SDK will install on Windows 10 Insider Preview builds and supported Windows operating systems.
      • To assist with script access to the SDK, the ISO can also be accessed through the following URL once the static URL is published: https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409&clcid=0x409&ar=Flight&sar=Sdsurl&o1=18309
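      A script consuming that static URL might want to tweak its query parameters, for example to point at a different build. The sketch below uses only the standard library; the parameter semantics (e.g. that o1 carries the build number) are inferred from their names in the URL above, not documented behavior.

```python
from urllib.parse import urlencode, urlparse, parse_qs

# The static download URL quoted above; parameter meanings are inferred
# from their names (o1 looks like the build number), not documented.
URL = ("https://go.microsoft.com/fwlink/?prd=11966&pver=1.0&plcid=0x409"
       "&clcid=0x409&ar=Flight&sar=Sdsurl&o1=18309")

def with_build(url, build):
    """Return the same fwlink URL with the build-number parameter swapped."""
    parts = urlparse(url)
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    params["o1"] = str(build)
    return parts._replace(query=urlencode(params)).geturl()

print(with_build(URL, 18309))
```

      The rewritten URL could then be handed to any downloader (curl, Invoke-WebRequest, etc.) from a build script.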

      Tools Updates

      Message Compiler (mc.exe)

      • The “-mof” switch (to generate XP-compatible ETW helpers) is deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated ETW helpers to expect Vista or later.
      • The “-A” switch (to generate .BIN files using ANSI encoding instead of Unicode) is deprecated and will be removed in a future version of mc.exe. Removing this switch will cause the generated .BIN files to use Unicode string encoding.
      • The behavior of the “-A” switch has changed. Prior to Windows 1607 Anniversary Update SDK, when using the -A switch, BIN files were encoded using the build system’s ANSI code page. In the Windows 1607 Anniversary Update SDK, mc.exe’s behavior was inadvertently changed to encode BIN files using the build system’s OEM code page. In the 19H1 SDK, mc.exe’s previous behavior has been restored and it now encodes BIN files using the build system’s ANSI code page. Note that the -A switch is deprecated, as ANSI-encoded BIN files do not provide a consistent user experience in multi-lingual systems.

      Breaking Changes

      Change to effect graph of the AcrylicBrush

      In this Preview SDK, we are adding a blend mode called Luminosity to the effect graph of the AcrylicBrush. This blend mode ensures that shadows do not appear behind acrylic surfaces without a cutout. We will also expose a LuminosityBlendOpacity API for tweaking this blend, allowing for more AcrylicBrush customization.

      By default, for those that have not specified any LuminosityBlendOpacity on their AcrylicBrushes, we have implemented some logic to ensure that the Acrylic will look as similar as it can to current 1809 acrylics. Please note that we will be updating our default brushes to account for this recipe change.

      TraceLoggingProvider.h  / TraceLoggingWrite

      Events generated by TraceLoggingProvider.h (e.g. via TraceLoggingWrite macros) will now always have Id and Version set to 0.

      Previously, TraceLoggingProvider.h would assign IDs to events at link time. These IDs were unique within a DLL or EXE, but changed from build to build and from module to module.

      API Updates, Additions and Removals

      Additions:

      
      namespace Windows.AI.MachineLearning {
        public sealed class LearningModelSession : IClosable {
          public LearningModelSession(LearningModel model, LearningModelDevice deviceToRunOn, LearningModelSessionOptions learningModelSessionOptions);
        }
        public sealed class LearningModelSessionOptions
        public sealed class TensorBoolean : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorBoolean CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorBoolean CreateFromShapeArrayAndDataArray(long[] shape, bool[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorDouble : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorDouble CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorDouble CreateFromShapeArrayAndDataArray(long[] shape, double[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorFloat : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorFloat CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorFloat CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorFloat16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorFloat16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorFloat16Bit CreateFromShapeArrayAndDataArray(long[] shape, float[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, short[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
         public static TensorInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, int[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, long[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorString : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorString CreateFromShapeArrayAndDataArray(long[] shape, string[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorUInt16Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorUInt16Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorUInt16Bit CreateFromShapeArrayAndDataArray(long[] shape, ushort[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorUInt32Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorUInt32Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorUInt32Bit CreateFromShapeArrayAndDataArray(long[] shape, uint[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorUInt64Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorUInt64Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorUInt64Bit CreateFromShapeArrayAndDataArray(long[] shape, ulong[] data);
          IMemoryBufferReference CreateReference();
        }
        public sealed class TensorUInt8Bit : IClosable, ILearningModelFeatureValue, IMemoryBuffer, ITensor {
          void Close();
          public static TensorUInt8Bit CreateFromBuffer(long[] shape, IBuffer buffer);
          public static TensorUInt8Bit CreateFromShapeArrayAndDataArray(long[] shape, byte[] data);
          IMemoryBufferReference CreateReference();
        }
      }
      namespace Windows.ApplicationModel {
        public sealed class Package {
          StorageFolder EffectiveLocation { get; }
          StorageFolder MutableLocation { get; }
        }
      }
      namespace Windows.ApplicationModel.AppService {
        public sealed class AppServiceConnection : IClosable {
          public static IAsyncOperation<StatelessAppServiceResponse> SendStatelessMessageAsync(AppServiceConnection connection, RemoteSystemConnectionRequest connectionRequest, ValueSet message);
        }
        public sealed class AppServiceTriggerDetails {
          string CallerRemoteConnectionToken { get; }
        }
        public sealed class StatelessAppServiceResponse
        public enum StatelessAppServiceResponseStatus
      }
      namespace Windows.ApplicationModel.Background {
        public sealed class ConversationalAgentTrigger : IBackgroundTrigger
      }
      namespace Windows.ApplicationModel.Calls {
        public sealed class PhoneLine {
          string TransportDeviceId { get; }
          void EnableTextReply(bool value);
        }
        public enum PhoneLineTransport {
          Bluetooth = 2,
        }
        public sealed class PhoneLineTransportDevice
      }
      namespace Windows.ApplicationModel.Calls.Background {
        public enum PhoneIncomingCallDismissedReason
        public sealed class PhoneIncomingCallDismissedTriggerDetails
        public enum PhoneTriggerType {
         IncomingCallDismissed = 6,
        }
      }
      namespace Windows.ApplicationModel.Calls.Provider {
        public static class PhoneCallOriginManager {
          public static bool IsSupported { get; }
        }
      }
      namespace Windows.ApplicationModel.ConversationalAgent {
        public sealed class ConversationalAgentSession : IClosable
        public sealed class ConversationalAgentSessionInterruptedEventArgs
        public enum ConversationalAgentSessionUpdateResponse
        public sealed class ConversationalAgentSignal
        public sealed class ConversationalAgentSignalDetectedEventArgs
        public enum ConversationalAgentState
        public sealed class ConversationalAgentSystemStateChangedEventArgs
        public enum ConversationalAgentSystemStateChangeType
      }
      namespace Windows.ApplicationModel.Preview.Holographic {
        public sealed class HolographicKeyboardPlacementOverridePreview
      }
      namespace Windows.ApplicationModel.Resources {
        public sealed class ResourceLoader {
          public static ResourceLoader GetForUIContext(UIContext context);
        }
      }
      namespace Windows.ApplicationModel.Resources.Core {
        public sealed class ResourceCandidate {
          ResourceCandidateKind Kind { get; }
        }
        public enum ResourceCandidateKind
        public sealed class ResourceContext {
          public static ResourceContext GetForUIContext(UIContext context);
        }
      }
      namespace Windows.ApplicationModel.UserActivities {
        public sealed class UserActivityChannel {
          public static UserActivityChannel GetForUser(User user);
        }
      }
      namespace Windows.Devices.Bluetooth.GenericAttributeProfile {
        public enum GattServiceProviderAdvertisementStatus {
          StartedWithoutAllAdvertisementData = 4,
        }
        public sealed class GattServiceProviderAdvertisingParameters {
          IBuffer ServiceData { get; set; }
        }
      }
      namespace Windows.Devices.Enumeration {
        public enum DevicePairingKinds : uint {
          ProvidePasswordCredential = (uint)16,
        }
        public sealed class DevicePairingRequestedEventArgs {
          void AcceptWithPasswordCredential(PasswordCredential passwordCredential);
        }
      }
      namespace Windows.Devices.Input {
        public sealed class PenDevice
      }
      namespace Windows.Devices.PointOfService {
        public sealed class JournalPrinterCapabilities : ICommonPosPrintStationCapabilities {
          bool IsReversePaperFeedByLineSupported { get; }
          bool IsReversePaperFeedByMapModeUnitSupported { get; }
          bool IsReverseVideoSupported { get; }
          bool IsStrikethroughSupported { get; }
          bool IsSubscriptSupported { get; }
          bool IsSuperscriptSupported { get; }
        }
        public sealed class JournalPrintJob : IPosPrinterJob {
          void FeedPaperByLine(int lineCount);
          void FeedPaperByMapModeUnit(int distance);
          void Print(string data, PosPrinterPrintOptions printOptions);
        }
        public sealed class PaymentDevice : IClosable
        public sealed class PaymentDeviceCapabilities
        public sealed class PaymentDeviceConfiguration
        public sealed class PaymentDeviceGetConfigurationResult
        public sealed class PaymentDeviceOperationResult
        public sealed class PaymentDeviceTransactionRequest
        public sealed class PaymentDeviceTransactionResult
        public sealed class PaymentMethod
        public enum PaymentMethodKind
        public enum PaymentOperationStatus
        public enum PaymentUserResponse
        public sealed class PosPrinter : IClosable {
          IVectorView<uint> SupportedBarcodeSymbologies { get; }
          PosPrinterFontProperty GetFontProperty(string typeface);
        }
        public sealed class PosPrinterFontProperty
        public sealed class PosPrinterPrintOptions
        public sealed class ReceiptPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
          bool IsReversePaperFeedByLineSupported { get; }
          bool IsReversePaperFeedByMapModeUnitSupported { get; }
          bool IsReverseVideoSupported { get; }
          bool IsStrikethroughSupported { get; }
          bool IsSubscriptSupported { get; }
          bool IsSuperscriptSupported { get; }
        }
        public sealed class ReceiptPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
          void FeedPaperByLine(int lineCount);
          void FeedPaperByMapModeUnit(int distance);
          void Print(string data, PosPrinterPrintOptions printOptions);
          void StampPaper();
        }
        public struct SizeUInt32
        public sealed class SlipPrinterCapabilities : ICommonPosPrintStationCapabilities, ICommonReceiptSlipCapabilities {
          bool IsReversePaperFeedByLineSupported { get; }
          bool IsReversePaperFeedByMapModeUnitSupported { get; }
          bool IsReverseVideoSupported { get; }
          bool IsStrikethroughSupported { get; }
          bool IsSubscriptSupported { get; }
          bool IsSuperscriptSupported { get; }
        }
        public sealed class SlipPrintJob : IPosPrinterJob, IReceiptOrSlipJob {
          void FeedPaperByLine(int lineCount);
          void FeedPaperByMapModeUnit(int distance);
          void Print(string data, PosPrinterPrintOptions printOptions);
        }
      }
      namespace Windows.Devices.PointOfService.Provider {
        public sealed class PaymentDeviceCloseTerminalRequest
        public sealed class PaymentDeviceCloseTerminalRequestEventArgs
        public sealed class PaymentDeviceConnection : IClosable
        public sealed class PaymentDeviceConnectionTriggerDetails
        public sealed class PaymentDeviceConnectorInfo
        public sealed class PaymentDeviceGetTerminalsRequest
        public sealed class PaymentDeviceGetTerminalsRequestEventArgs
        public sealed class PaymentDeviceOpenTerminalRequest
        public sealed class PaymentDeviceOpenTerminalRequestEventArgs
        public sealed class PaymentDevicePaymentAuthorizationRequest
        public sealed class PaymentDevicePaymentAuthorizationRequestEventArgs
        public sealed class PaymentDevicePaymentRequest
        public sealed class PaymentDevicePaymentRequestEventArgs
        public sealed class PaymentDeviceReadCapabilitiesRequest
        public sealed class PaymentDeviceReadCapabilitiesRequestEventArgs
        public sealed class PaymentDeviceReadConfigurationRequest
        public sealed class PaymentDeviceReadConfigurationRequestEventArgs
        public sealed class PaymentDeviceRefundRequest
        public sealed class PaymentDeviceRefundRequestEventArgs
        public sealed class PaymentDeviceVoidTokenRequest
        public sealed class PaymentDeviceVoidTokenRequestEventArgs
        public sealed class PaymentDeviceVoidTransactionRequest
        public sealed class PaymentDeviceVoidTransactionRequestEventArgs
        public sealed class PaymentDeviceWriteConfigurationRequest
        public sealed class PaymentDeviceWriteConfigurationRequestEventArgs
      }
      namespace Windows.Globalization {
        public sealed class CurrencyAmount
      }
      namespace Windows.Graphics.DirectX {
        public enum DirectXPrimitiveTopology
      }
      namespace Windows.Graphics.Holographic {
        public sealed class HolographicCamera {
          HolographicViewConfiguration ViewConfiguration { get; }
        }
        public sealed class HolographicDisplay {
          HolographicViewConfiguration TryGetViewConfiguration(HolographicViewConfigurationKind kind);
        }
        public sealed class HolographicViewConfiguration
        public enum HolographicViewConfigurationKind
      }
      namespace Windows.Management.Deployment {
        public enum AddPackageByAppInstallerOptions : uint {
          LimitToExistingPackages = (uint)512,
        }
        public enum DeploymentOptions : uint {
          RetainFilesOnFailure = (uint)2097152,
        }
      }
      namespace Windows.Media.Devices {
        public sealed class InfraredTorchControl
        public enum InfraredTorchMode
        public sealed class VideoDeviceController : IMediaDeviceController {
          InfraredTorchControl InfraredTorchControl { get; }
        }
      }
      namespace Windows.Media.Miracast {
        public sealed class MiracastReceiver
        public sealed class MiracastReceiverApplySettingsResult
        public enum MiracastReceiverApplySettingsStatus
        public enum MiracastReceiverAuthorizationMethod
        public sealed class MiracastReceiverConnection : IClosable
        public sealed class MiracastReceiverConnectionCreatedEventArgs
        public sealed class MiracastReceiverCursorImageChannel
        public sealed class MiracastReceiverCursorImageChannelSettings
        public sealed class MiracastReceiverDisconnectedEventArgs
        public enum MiracastReceiverDisconnectReason
        public sealed class MiracastReceiverGameControllerDevice
        public enum MiracastReceiverGameControllerDeviceUsageMode
        public sealed class MiracastReceiverInputDevices
        public sealed class MiracastReceiverKeyboardDevice
        public enum MiracastReceiverListeningStatus
        public sealed class MiracastReceiverMediaSourceCreatedEventArgs
        public sealed class MiracastReceiverSession : IClosable
        public sealed class MiracastReceiverSessionStartResult
        public enum MiracastReceiverSessionStartStatus
        public sealed class MiracastReceiverSettings
        public sealed class MiracastReceiverStatus
        public sealed class MiracastReceiverStreamControl
        public sealed class MiracastReceiverVideoStreamSettings
        public enum MiracastReceiverWiFiStatus
        public sealed class MiracastTransmitter
        public enum MiracastTransmitterAuthorizationStatus
      }
      namespace Windows.Networking.Connectivity {
        public enum NetworkAuthenticationType {
          Wpa3 = 10,
          Wpa3Sae = 11,
        }
      }
      namespace Windows.Networking.NetworkOperators {
        public sealed class ESim {
          ESimDiscoverResult Discover();
          ESimDiscoverResult Discover(string serverAddress, string matchingId);
          IAsyncOperation<ESimDiscoverResult> DiscoverAsync();
          IAsyncOperation<ESimDiscoverResult> DiscoverAsync(string serverAddress, string matchingId);
        }
        public sealed class ESimDiscoverEvent
        public sealed class ESimDiscoverResult
        public enum ESimDiscoverResultKind
      }
      namespace Windows.Networking.PushNotifications {
        public static class PushNotificationChannelManager {
          public static event EventHandler<PushNotificationChannelsRevokedEventArgs> ChannelsRevoked;
        }
        public sealed class PushNotificationChannelsRevokedEventArgs
      }
      namespace Windows.Perception.People {
        public sealed class EyesPose
        public enum HandJointKind
        public sealed class HandMeshObserver
        public struct HandMeshVertex
        public sealed class HandMeshVertexState
        public sealed class HandPose
        public struct JointPose
        public enum JointPoseAccuracy
      }
      namespace Windows.Perception.Spatial {
        public struct SpatialRay
      }
      namespace Windows.Perception.Spatial.Preview {
        public sealed class SpatialGraphInteropFrameOfReferencePreview
        public static class SpatialGraphInteropPreview {
          public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem);
          public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition);
          public static SpatialGraphInteropFrameOfReferencePreview TryCreateFrameOfReference(SpatialCoordinateSystem coordinateSystem, Vector3 relativePosition, Quaternion relativeOrientation);
        }
      }
      namespace Windows.Security.DataProtection {
        public enum UserDataAvailability
        public sealed class UserDataAvailabilityStateChangedEventArgs
        public sealed class UserDataBufferUnprotectResult
        public enum UserDataBufferUnprotectStatus
        public sealed class UserDataProtectionManager
        public sealed class UserDataStorageItemProtectionInfo
        public enum UserDataStorageItemProtectionStatus
      }
      namespace Windows.Storage.AccessCache {
        public static class StorageApplicationPermissions {
          public static StorageItemAccessList GetFutureAccessListForUser(User user);
          public static StorageItemMostRecentlyUsedList GetMostRecentlyUsedListForUser(User user);
        }
      }
      namespace Windows.Storage.Pickers {
        public sealed class FileOpenPicker {
          User User { get; }
          public static FileOpenPicker CreateForUser(User user);
        }
        public sealed class FileSavePicker {
          User User { get; }
          public static FileSavePicker CreateForUser(User user);
        }
        public sealed class FolderPicker {
          User User { get; }
          public static FolderPicker CreateForUser(User user);
        }
      }
      namespace Windows.System {
        public sealed class DispatcherQueue {
          bool HasThreadAccess { get; }
        }
        public enum ProcessorArchitecture {
          Arm64 = 12,
          X86OnArm64 = 14,
        }
      }
      namespace Windows.System.Profile {
        public static class AppApplicability
        public sealed class UnsupportedAppRequirement
        public enum UnsupportedAppRequirementReasons : uint
      }
      namespace Windows.System.RemoteSystems {
        public sealed class RemoteSystem {
          User User { get; }
          public static RemoteSystemWatcher CreateWatcherForUser(User user);
          public static RemoteSystemWatcher CreateWatcherForUser(User user, IIterable<IRemoteSystemFilter> filters);
        }
        public sealed class RemoteSystemApp {
          string ConnectionToken { get; }
          User User { get; }
        }
        public sealed class RemoteSystemConnectionRequest {
          string ConnectionToken { get; }
          public static RemoteSystemConnectionRequest CreateFromConnectionToken(string connectionToken);
          public static RemoteSystemConnectionRequest CreateFromConnectionTokenForUser(User user, string connectionToken);
        }
        public sealed class RemoteSystemWatcher {
          User User { get; }
        }
      }
      namespace Windows.UI.Composition {
        public enum CompositionBitmapInterpolationMode {
          MagLinearMinLinearMipLinear = 2,
          MagLinearMinLinearMipNearest = 3,
          MagLinearMinNearestMipLinear = 4,
          MagLinearMinNearestMipNearest = 5,
          MagNearestMinLinearMipLinear = 6,
          MagNearestMinLinearMipNearest = 7,
          MagNearestMinNearestMipLinear = 8,
          MagNearestMinNearestMipNearest = 9,
        }
        public sealed class CompositionGraphicsDevice : CompositionObject {
          CompositionMipmapSurface CreateMipmapSurface(SizeInt32 sizePixels, DirectXPixelFormat pixelFormat, DirectXAlphaMode alphaMode);
        }
        public sealed class CompositionMipmapSurface : CompositionObject, ICompositionSurface
        public sealed class CompositionProjectedShadow : CompositionObject
        public sealed class CompositionProjectedShadowCaster : CompositionObject
        public sealed class CompositionProjectedShadowCasterCollection : CompositionObject, IIterable<CompositionProjectedShadowCaster>
        public enum CompositionProjectedShadowDrawOrder
        public sealed class CompositionProjectedShadowLegacyCaster : CompositionObject
        public sealed class CompositionProjectedShadowLegacyCasterCollection : CompositionObject, IIterable<CompositionProjectedShadowLegacyCaster>
        public sealed class CompositionProjectedShadowLegacyReceiver : CompositionObject
        public sealed class CompositionProjectedShadowLegacyReceiverUnorderedCollection : CompositionObject, IIterable<CompositionProjectedShadowLegacyReceiver>
        public sealed class CompositionProjectedShadowLegacyScene : CompositionObject
        public enum CompositionProjectedShadowPolicy
        public sealed class CompositionProjectedShadowReceiver : CompositionObject, IIterable<CompositionProjectedShadow>
        public sealed class CompositionRadialGradientBrush : CompositionGradientBrush
        public class CompositionTransform : CompositionObject
        public sealed class CompositionVisualSurface : CompositionObject, ICompositionSurface
        public sealed class Compositor : IClosable {
          CompositionProjectedShadow CreateProjectedShadow();
          CompositionProjectedShadowCaster CreateProjectedShadowCaster();
          CompositionProjectedShadowLegacyCaster CreateProjectedShadowLegacyCaster();
          CompositionProjectedShadowLegacyReceiver CreateProjectedShadowLegacyReceiver();
          CompositionProjectedShadowLegacyScene CreateProjectedShadowLegacyScene();
          CompositionRadialGradientBrush CreateRadialGradientBrush();
          CompositionVisualSurface CreateVisualSurface();
        }
        public interface ICompositorPartner_ProjectedShadow
        public interface ICompositorPartner_ProjectedShadowLegacy
        public interface IVisualElement
        public sealed class UIContentRoot
        public sealed class UIContext
        public class Visual : CompositionObject {
          CompositionProjectedShadowReceiver ReceivedShadows { get; }
        }
      }
      namespace Windows.UI.Composition.Interactions {
        public enum InteractionBindingAxisModes : uint
        public sealed class InteractionTracker : CompositionObject {
          public static void SetBindingMode(InteractionTracker boundTracker1, InteractionTracker boundTracker2, InteractionBindingAxisModes axisMode);
        }
        public sealed class InteractionTrackerCustomAnimationStateEnteredArgs {
          bool IsFromBinding { get; }
        }
        public sealed class InteractionTrackerIdleStateEnteredArgs {
          bool IsFromBinding { get; }
        }
        public sealed class InteractionTrackerInertiaStateEnteredArgs {
          bool IsFromBinding { get; }
        }
        public sealed class InteractionTrackerInteractingStateEnteredArgs {
          bool IsFromBinding { get; }
        }
        public class VisualInteractionSource : CompositionObject, ICompositionInteractionSource {
          public static VisualInteractionSource CreateFromIVisualElement(IVisualElement source);
        }
      }
      namespace Windows.UI.Composition.Scenes {
        public enum SceneAlphaMode
        public enum SceneAttributeSemantic
        public sealed class SceneBoundingBox : SceneObject
        public class SceneComponent : SceneObject
        public sealed class SceneComponentCollection : SceneObject, IIterable<SceneComponent>, IVector<SceneComponent>
        public enum SceneComponentType
        public class SceneMaterial : SceneObject
        public class SceneMaterialInput : SceneObject
        public sealed class SceneMesh : SceneObject
        public sealed class SceneMeshMaterialAttributeMap : SceneObject, IIterable<IKeyValuePair<string, SceneAttributeSemantic>>, IMap<string, SceneAttributeSemantic>
        public sealed class SceneMeshRendererComponent : SceneRendererComponent
        public sealed class SceneMetallicRoughnessMaterial : ScenePbrMaterial
        public sealed class SceneModelTransform : CompositionTransform
        public sealed class SceneNode : SceneObject
        public sealed class SceneNodeCollection : SceneObject, IIterable<SceneNode>, IVector<SceneNode>
        public class SceneObject : CompositionObject
        public class ScenePbrMaterial : SceneMaterial
        public class SceneRendererComponent : SceneComponent
        public sealed class SceneSurfaceMaterialInput : SceneMaterialInput
        public sealed class SceneVisual : ContainerVisual
        public enum SceneWrappingMode
      }
      namespace Windows.UI.Core {
        public sealed class CoreWindow : ICorePointerRedirector, ICoreWindow {
          UIContext UIContext { get; }
        }
      }
      namespace Windows.UI.Core.Preview {
        public sealed class CoreAppWindowPreview
      }
      namespace Windows.UI.Input {
        public class AttachableInputObject : IClosable
        public enum GazeInputAccessStatus
        public sealed class InputActivationListener : AttachableInputObject
        public sealed class InputActivationListenerActivationChangedEventArgs
        public enum InputActivationState
      }
      namespace Windows.UI.Input.Preview {
        public static class InputActivationListenerPreview
      }
      namespace Windows.UI.Input.Spatial {
        public sealed class SpatialInteractionManager {
          public static bool IsSourceKindSupported(SpatialInteractionSourceKind kind);
        }
        public sealed class SpatialInteractionSource {
          HandMeshObserver TryCreateHandMeshObserver();
          IAsyncOperation<HandMeshObserver> TryCreateHandMeshObserverAsync();
        }
        public sealed class SpatialInteractionSourceState {
          HandPose TryGetHandPose();
        }
        public sealed class SpatialPointerPose {
          EyesPose Eyes { get; }
          bool IsHeadCapturedBySystem { get; }
        }
      }
      namespace Windows.UI.Notifications {
        public sealed class ToastActivatedEventArgs {
          ValueSet UserInput { get; }
        }
        public sealed class ToastNotification {
          bool ExpiresOnReboot { get; set; }
        }
      }
      namespace Windows.UI.ViewManagement {
        public sealed class ApplicationView {
          string PersistedStateId { get; set; }
          UIContext UIContext { get; }
          WindowingEnvironment WindowingEnvironment { get; }
          public static void ClearAllPersistedState();
          public static void ClearPersistedState(string key);
          IVectorView<DisplayRegion> GetDisplayRegions();
        }
        public sealed class InputPane {
          public static InputPane GetForUIContext(UIContext context);
        }
        public sealed class UISettings {
          bool AutoHideScrollBars { get; }
          event TypedEventHandler<UISettings, UISettingsAutoHideScrollBarsChangedEventArgs> AutoHideScrollBarsChanged;
        }
        public sealed class UISettingsAutoHideScrollBarsChangedEventArgs
      }
      namespace Windows.UI.ViewManagement.Core {
        public sealed class CoreInputView {
          public static CoreInputView GetForUIContext(UIContext context);
        }
      }
      namespace Windows.UI.WindowManagement {
        public sealed class AppWindow
        public sealed class AppWindowChangedEventArgs
        public sealed class AppWindowClosedEventArgs
        public enum AppWindowClosedReason
        public sealed class AppWindowCloseRequestedEventArgs
        public sealed class AppWindowFrame
        public enum AppWindowFrameStyle
        public sealed class AppWindowPlacement
        public class AppWindowPresentationConfiguration
        public enum AppWindowPresentationKind
        public sealed class AppWindowPresenter
        public sealed class AppWindowTitleBar
        public sealed class AppWindowTitleBarOcclusion
        public enum AppWindowTitleBarVisibility
        public sealed class CompactOverlayPresentationConfiguration : AppWindowPresentationConfiguration
        public sealed class DefaultPresentationConfiguration : AppWindowPresentationConfiguration
        public sealed class DisplayRegion
        public sealed class FullScreenPresentationConfiguration : AppWindowPresentationConfiguration
        public sealed class WindowingEnvironment
        public sealed class WindowingEnvironmentAddedEventArgs
        public sealed class WindowingEnvironmentChangedEventArgs
        public enum WindowingEnvironmentKind
        public sealed class WindowingEnvironmentRemovedEventArgs
      }
      namespace Windows.UI.WindowManagement.Preview {
        public sealed class WindowManagementPreview
      }
      namespace Windows.UI.Xaml {
        public class UIElement : DependencyObject, IAnimationObject, IVisualElement {
          Vector3 ActualOffset { get; }
          Vector2 ActualSize { get; }
          Shadow Shadow { get; set; }
          public static DependencyProperty ShadowProperty { get; }
          UIContext UIContext { get; }
          XamlRoot XamlRoot { get; set; }
        }
        public class UIElementWeakCollection : IIterable<UIElement>, IVector<UIElement>
        public sealed class Window {
          UIContext UIContext { get; }
        }
        public sealed class XamlRoot
        public sealed class XamlRootChangedEventArgs
      }
      namespace Windows.UI.Xaml.Controls {
        public sealed class DatePickerFlyoutPresenter : Control {
          bool IsDefaultShadowEnabled { get; set; }
          public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
        }
        public class FlyoutPresenter : ContentControl {
          bool IsDefaultShadowEnabled { get; set; }
          public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
        }
        public class InkToolbar : Control {
          InkPresenter TargetInkPresenter { get; set; }
          public static DependencyProperty TargetInkPresenterProperty { get; }
        }
        public class MenuFlyoutPresenter : ItemsControl {
          bool IsDefaultShadowEnabled { get; set; }
          public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
        }
        public class RichEditBox : Control {
          void CopySelectionToClipboard();
          void CutSelectionToClipboard();
          void PasteFromClipboard();
        }
        public sealed class TimePickerFlyoutPresenter : Control {
          bool IsDefaultShadowEnabled { get; set; }
          public static DependencyProperty IsDefaultShadowEnabledProperty { get; }
        }
        public class TwoPaneView : Control
        public enum TwoPaneViewMode
        public enum TwoPaneViewPriority
        public enum TwoPaneViewTallModeConfiguration
        public enum TwoPaneViewWideModeConfiguration
      }
      namespace Windows.UI.Xaml.Controls.Maps {
        public sealed class MapControl : Control {
          bool CanTiltDown { get; }
          public static DependencyProperty CanTiltDownProperty { get; }
          bool CanTiltUp { get; }
          public static DependencyProperty CanTiltUpProperty { get; }
          bool CanZoomIn { get; }
          public static DependencyProperty CanZoomInProperty { get; }
          bool CanZoomOut { get; }
          public static DependencyProperty CanZoomOutProperty { get; }
        }
        public enum MapLoadingStatus {
          DownloadedMapsManagerUnavailable = 3,
        }
      }
      namespace Windows.UI.Xaml.Controls.Primitives {
        public sealed class AppBarTemplateSettings : DependencyObject {
          double NegativeCompactVerticalDelta { get; }
          double NegativeHiddenVerticalDelta { get; }
          double NegativeMinimalVerticalDelta { get; }
        }
        public sealed class CommandBarTemplateSettings : DependencyObject {
          double OverflowContentCompactYTranslation { get; }
          double OverflowContentHiddenYTranslation { get; }
          double OverflowContentMinimalYTranslation { get; }
        }
        public class FlyoutBase : DependencyObject {
          bool IsConstrainedToRootBounds { get; }
          bool ShouldConstrainToRootBounds { get; set; }
          public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
          XamlRoot XamlRoot { get; set; }
        }
        public sealed class Popup : FrameworkElement {
          bool IsConstrainedToRootBounds { get; }
          bool ShouldConstrainToRootBounds { get; set; }
          public static DependencyProperty ShouldConstrainToRootBoundsProperty { get; }
        }
      }
      namespace Windows.UI.Xaml.Core.Direct {
        public enum XamlPropertyIndex {
          AppBarTemplateSettings_NegativeCompactVerticalDelta = 2367,
          AppBarTemplateSettings_NegativeHiddenVerticalDelta = 2368,
          AppBarTemplateSettings_NegativeMinimalVerticalDelta = 2369,
          CommandBarTemplateSettings_OverflowContentCompactYTranslation = 2384,
          CommandBarTemplateSettings_OverflowContentHiddenYTranslation = 2385,
          CommandBarTemplateSettings_OverflowContentMinimalYTranslation = 2386,
          FlyoutBase_ShouldConstrainToRootBounds = 2378,
          FlyoutPresenter_IsDefaultShadowEnabled = 2380,
          MenuFlyoutPresenter_IsDefaultShadowEnabled = 2381,
          Popup_ShouldConstrainToRootBounds = 2379,
          ThemeShadow_Receivers = 2279,
          UIElement_ActualOffset = 2382,
          UIElement_ActualSize = 2383,
          UIElement_Shadow = 2130,
        }
        public enum XamlTypeIndex {
          ThemeShadow = 964,
        }
      }
      namespace Windows.UI.Xaml.Documents {
        public class TextElement : DependencyObject {
          XamlRoot XamlRoot { get; set; }
        }
      }
      namespace Windows.UI.Xaml.Hosting {
        public sealed class ElementCompositionPreview {
          public static UIElement GetAppWindowContent(AppWindow appWindow);
          public static void SetAppWindowContent(AppWindow appWindow, UIElement xamlContent);
        }
      }
      namespace Windows.UI.Xaml.Input {
        public sealed class FocusManager {
          public static object GetFocusedElement(XamlRoot xamlRoot);
        }
        public class StandardUICommand : XamlUICommand {
          StandardUICommandKind Kind { get; set; }
        }
      }
      namespace Windows.UI.Xaml.Media {
        public class AcrylicBrush : XamlCompositionBrushBase {
          IReference<double> TintLuminosityOpacity { get; set; }
          public static DependencyProperty TintLuminosityOpacityProperty { get; }
        }
        public class Shadow : DependencyObject
        public class ThemeShadow : Shadow
        public sealed class VisualTreeHelper {
          public static IVectorView<Popup> GetOpenPopupsForXamlRoot(XamlRoot xamlRoot);
        }
      }
      namespace Windows.UI.Xaml.Media.Animation {
        public class GravityConnectedAnimationConfiguration : ConnectedAnimationConfiguration {
          bool IsShadowEnabled { get; set; }
        }
      }
      namespace Windows.Web.Http {
        public sealed class HttpClient : IClosable, IStringable {
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryDeleteAsync(Uri uri);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryGetAsync(Uri uri, HttpCompletionOption completionOption);
          IAsyncOperationWithProgress<HttpGetBufferResult, HttpProgress> TryGetBufferAsync(Uri uri);
          IAsyncOperationWithProgress<HttpGetInputStreamResult, HttpProgress> TryGetInputStreamAsync(Uri uri);
          IAsyncOperationWithProgress<HttpGetStringResult, HttpProgress> TryGetStringAsync(Uri uri);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPostAsync(Uri uri, IHttpContent content);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TryPutAsync(Uri uri, IHttpContent content);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request);
          IAsyncOperationWithProgress<HttpRequestResult, HttpProgress> TrySendRequestAsync(HttpRequestMessage request, HttpCompletionOption completionOption);
        }
        public sealed class HttpGetBufferResult : IClosable, IStringable
        public sealed class HttpGetInputStreamResult : IClosable, IStringable
        public sealed class HttpGetStringResult : IClosable, IStringable
        public sealed class HttpRequestResult : IClosable, IStringable
      }
      namespace Windows.Web.Http.Filters {
        public sealed class HttpBaseProtocolFilter : IClosable, IHttpFilter {
          User User { get; }
          public static HttpBaseProtocolFilter CreateForUser(User user);
        }
      }
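
      Among the additions above, the new Windows.Web.Http.HttpClient Try* methods return an HttpRequestResult instead of throwing on network failures. A minimal usage sketch (assuming a UWP app targeting this SDK preview; the URL is illustrative):

      ```csharp
      // Sketch: the Try* pattern lets callers inspect failures via the
      // result object rather than catching exceptions.
      using System;
      using Windows.Web.Http;

      public static class TryGetSketch
      {
          public static async void RunAsync()
          {
              var client = new HttpClient();
              HttpRequestResult result =
                  await client.TryGetAsync(new Uri("https://example.com/"));

              if (result.Succeeded)
              {
                  // The response is available as usual on success.
                  string body = await result.ResponseMessage.Content.ReadAsStringAsync();
              }
              else
              {
                  // On failure, inspect result.ExtendedError instead of
                  // handling a thrown exception.
              }
          }
      }
      ```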
      
      

      The post Windows 10 SDK Preview Build 18309 available now! appeared first on Windows Developer Blog.

      .NET Framework January 2019 Security and Quality Rollup


      Today, we are releasing the January 2019 Security and Quality Rollup.

      Security

      CVE-2019-0545 – Windows Security Feature Bypass Vulnerability

      This security update resolves a vulnerability in the Microsoft .NET Framework that can lead to information disclosure by allowing Cross-origin Resource Sharing (CORS) configurations to be bypassed. An attacker who successfully exploits the vulnerability could retrieve normally restricted content from a web application. The update addresses the vulnerability by enforcing the CORS configuration so that it cannot be bypassed.
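
      The advisory does not name the specific component patched, but as an illustration of what CORS enforcement looks like on the .NET Framework, an ASP.NET Web API application typically restricts cross-origin reads like this (a sketch assuming the Microsoft.AspNet.WebApi.Cors package; the origin URL is a placeholder):

      ```csharp
      // Illustrative sketch only; not the code changed by CVE-2019-0545.
      // Limits cross-origin reads to a single allowed origin.
      using System.Web.Http;
      using System.Web.Http.Cors;

      public static class WebApiConfig
      {
          public static void Register(HttpConfiguration config)
          {
              // Browsers will block reads from any origin other than the
              // one listed here, per the CORS policy the server emits.
              var cors = new EnableCorsAttribute(
                  origins: "https://trusted.example",
                  headers: "*",
                  methods: "GET,POST");
              config.EnableCors(cors);

              config.MapHttpAttributeRoutes();
          }
      }
      ```

      The vulnerability described above is precisely a failure of this kind of policy to be enforced, allowing content to be read from origins outside the configured list.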

      CVE-2019-0545

      Getting the Update

      The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, Microsoft Update Catalog, and Docker.

      Microsoft Update Catalog

      You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework updates are part of the Windows 10 Monthly Rollup.

      The following table is for Windows 10 and Windows Server 2016+ versions.

      Product Version: Security and Quality Rollup KB

      Windows 10 1809 (October 2018 Update) and Windows Server 2019: Catalog 4480056
        .NET Framework 3.5, 4.7.2: KB 4480056
      Windows 10 1803 (April 2018 Update): Catalog 4480966
        .NET Framework 3.5, 4.7.2: KB 4480966
      Windows 10 1709 (Fall Creators Update): Catalog 4480978
        .NET Framework 3.5, 4.7.1, 4.7.2: KB 4480978
      Windows 10 1703 (Creators Update): Catalog 4480973
        .NET Framework 3.5, 4.7, 4.7.1, 4.7.2: KB 4480973
      Windows 10 1607 (Anniversary Update) and Windows Server 2016: Catalog 4480961
        .NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2: KB 4480961
      Windows 10 1507: Catalog 4480962
        .NET Framework 3.5, 4.6, 4.6.1, 4.6.2: KB 4480962

      The following table is for earlier Windows and Windows Server versions.

       

      Product Version: Security and Quality Rollup KB / Security Only Update KB

      Windows 8.1, Windows RT 8.1, and Windows Server 2012 R2: Rollup Catalog 4481485 / Security Only Catalog 4481484
        .NET Framework 3.5: KB 4480064 / KB 4480086
        .NET Framework 4.5.2: KB 4480057 / KB 4480074
        .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: KB 4480054 / KB 4480071
      Windows Server 2012: Rollup Catalog 4481482 / Security Only Catalog 4481483
        .NET Framework 3.5: KB 4480061 / KB 4480083
        .NET Framework 4.5.2: KB 4480058 / KB 4480075
        .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: KB 4480051 / KB 4480070
      Windows 7 SP1 and Windows Server 2008 R2 SP1: Rollup Catalog 4481480 / Security Only Catalog 4481481
        .NET Framework 3.5.1: KB 4480063 / KB 4480085
        .NET Framework 4.5.2: KB 4480059 / KB 4480076
        .NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2: KB 4480055 / KB 4480072
      Windows Server 2008: Rollup Catalog 4481486 / Security Only Catalog 4481487
        .NET Framework 2.0, 3.0: KB 4480062 / KB 4480084
        .NET Framework 4.5.2: KB 4480059 / KB 4480076
        .NET Framework 4.6: KB 4480055 / KB 4480072

      Docker Images

      We are updating the following .NET Framework Docker images for today’s release:

      Note: Look at the “Tags” view in each repository to see the updated Docker image tags.

      Note: Significant changes have been made with Docker images recently. Please look at .NET Docker Announcements for more information.

      Previous Monthly Rollups

      The last few .NET Framework Monthly updates are listed below for your convenience:

      .NET Core January 2019 Updates – 2.1.7 and 2.2.1


      Today, we are releasing the .NET Core January 2019 Update. These updates contain security and reliability fixes.

      Security

      CVE-2019-0545: .NET Core Information Disclosure Vulnerability

This security update addresses the vulnerability by enforcing Cross-origin Resource Sharing (CORS) configuration to prevent its bypass in .NET Core 2.1 and 2.2. An attacker who successfully exploited the vulnerability could retrieve content that is normally restricted from a web application.

      CVE-2019-0548: ASP.NET Core Denial Of Service Vulnerability

This security vulnerability exists in ASP.NET Core 1.0, 1.1, 2.1, and 2.2. If an application is hosted on Internet Information Services (IIS), a remote unauthenticated attacker can use a specially crafted request to cause a denial of service.

      CVE-2019-0564: ASP.NET Core Denial Of Service Vulnerability

This security vulnerability exists in ASP.NET Core 1.0, 1.1, 2.1, and 2.2. If an application is hosted on Internet Information Services (IIS), a remote unauthenticated attacker can use a specially crafted request to cause a denial of service.

      CVE-2018-8416: .NET Core Tampering Vulnerability

A security vulnerability exists in the way that .NET Core 2.1 handles specially crafted files. An attacker who successfully exploited this vulnerability could write arbitrary files and directories to certain locations on a vulnerable system. However, the attacker would have limited control over the destination of the files and directories.

      To exploit the vulnerability, an attacker must send a specially crafted file to a vulnerable system.
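File-handling flaws of this kind typically let entries in a crafted archive or file escape their intended directory. As a generic, hedged illustration of the defensive check involved (not the actual .NET Core fix), an extractor can resolve each entry against the destination root and reject anything that lands outside it:

```python
import os

def safe_target_path(dest_dir: str, entry_name: str) -> str:
    """Resolve an archive entry inside dest_dir, rejecting escapes.

    Crafted names like '../../etc/cron.d/x' or absolute paths would
    otherwise let an attacker write outside the extraction root.
    """
    dest_dir = os.path.abspath(dest_dir)
    target = os.path.abspath(os.path.join(dest_dir, entry_name))
    if os.path.commonpath([dest_dir, target]) != dest_dir:
        raise ValueError(f"blocked path traversal: {entry_name!r}")
    return target
```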

      Windows ARM support

      This release includes the first availability of .NET Core for Windows Server, version 1809 ARM32. Runtime zips can be found on the downloads page. The SDK zip is expected to be live on the 9th and this note will be updated when that happens.

      Getting the Update

      The latest .NET Core updates are available on the .NET Core download page. This update is included in the Visual Studio 15.9.5 update, which is also releasing today.

      See the .NET Core release notes ( 2.1.7 | 2.2.1 ) for details on the release including a detailed commit list and affected files.

      Docker Images

      The .NET Core Docker images have been updated for this release. Details on our Docker versioning and how to work with the images can be seen in “Staying up-to-date with .NET Container Images”.

      microsoft/dotnet
      microsoft/dotnet-samples
      microsoft/aspnetcore

      Azure App Services deployment

      Deployment of .NET Core 2.1.7 and 2.2.1 to Azure App Services has begun. The West Central US, North Central US and West US 2 regions are live now. Remaining regions will be updated over the next few days.

      CES 2019—new commercial PCs and displays unveiled by Microsoft partners Dell, HP, and Lenovo

      Announcing the general availability of Azure Data Box Disk


      Since our preview announcement, hundreds of customers have been moving recurring workloads, media captures from automobiles, incremental transfers for ongoing backups, and archives from remote/office branch offices (ROBOs) to Microsoft Azure. We’re excited to announce the general availability of Azure Data Box Disk, an SSD-based solution for offline data transfer to Azure. Data Box Disk is now available in the US, EU, Canada, and Australia, with more country/regions to be added over time. Also, be sure not to miss the announcement of the public preview for Blob Storage on Azure Data Box below!

      Top three reasons customers use Data Box Disk

1. Easy to order and use: Each disk is an 8 TB SSD. You can easily order a pack of up to five disks from the Azure portal, for a total capacity of 40 TB per order. The small form factor provides the right balance of capacity and portability to collect and transport data in a variety of use cases. Support is available for Windows and Linux.
2. Fast data transfer: These SSDs copy data at up to USB 3.1 speeds and also support the SATA II and SATA III interfaces. Simply mount the disks as drives and use any tool of choice, such as Robocopy, or just drag and drop files to copy them to the disks.
3. Security: The disks are encrypted using 128-bit AES encryption and can be locked with your custom passkeys. After the data upload to Azure is complete, the disks are wiped clean in accordance with NIST SP 800-88 Rev. 1 standards.

      Get started now

Data Box Disk is currently available in the US, EU, Australia, and Canada, and we will continue to expand to more country/regions in the coming months. To get started, refer to the online tutorial and order your Data Box Disk today. A complete list of supported operating systems can be found in our documentation, “Azure Data Box Disk system requirements.” For a deep dive on the toolset, see our documentation “Tutorial: Unpack, connect, and unlock Azure Data Box Disk.”

      Announcing Blob Storage on Azure Data Box – Public preview now available

      We also are launching the public preview of Azure Data Box Blob Storage. When enabled, this feature will allow you to copy data to Blob Storage on Data Box using blob service REST APIs. We are working with leading partners in the space to ensure you can use your favorite data copy tools.

      For more details on using Blob Storage with Data Box, see our official documentation for “Azure Data Box Blob Storage requirements” and a tutorial on copying data via Azure Data Box Blob Storage REST APIs.
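As a sketch of what "copying data via blob service REST APIs" involves, a Put Blob request needs at minimum the x-ms-blob-type and x-ms-version headers. The account and container names below are hypothetical, and authentication (a Shared Key signature or SAS token) is deliberately omitted, so treat this as an outline rather than a working client:

```python
# Build the URL and non-auth headers for an Azure Blob "Put Blob" call.
# Account/container/blob names are placeholders; real requests must be
# signed (Shared Key) or carry a SAS token.
def put_blob_request(account: str, container: str, blob: str, body: bytes):
    url = f"https://{account}.blob.core.windows.net/{container}/{blob}"
    headers = {
        "x-ms-blob-type": "BlockBlob",  # required for Put Blob
        "x-ms-version": "2018-03-28",   # REST API version (assumed)
        "Content-Length": str(len(body)),
    }
    return url, headers
```

Against a Data Box Blob Storage endpoint, only the host portion of the URL changes; the headers and verbs follow the same blob service REST API.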

      Thank you to everyone who participated in the preview of Azure Data Box Disk, and to those continuing to participate in previews for other products in the Data Box family including Data Box Heavy, Data Box Edge, and Data Box Gateway! In the coming months, we plan to make many enhancements based on your suggestions. Please continue to provide your valuable comments by posting on Azure Feedback.
