
Changes in the foreach package


by Hong Ooi, Senior Data Scientist at Microsoft and maintainer of the foreach package

This post is to announce some new and upcoming changes in the foreach package.

First, foreach can now be found on GitHub! The repository is at https://github.com/RevolutionAnalytics/foreach, replacing its old home on R-Forge. Right now the repo hosts both the foreach and iterators packages, but that may change later.

The latest 1.4.8 version of foreach, which is now live on CRAN, adds preliminary support for evaluating %dopar% expressions in a local environment when a sequential backend is used. This addresses a long-standing inconsistency in the behaviour of %dopar% between parallel and sequential backends, where the latter would evaluate the loop body in the global environment by default. This is a common source of bugs: code that works when prototyped with a sequential backend mysteriously fails with a “real” parallel backend.

From version 1.4.8, the behaviour of %dopar% can be controlled with

options(foreachDoparLocal=TRUE|FALSE)

or equivalently via the system environment variable

R_FOREACH_DOPAR_LOCAL=TRUE|FALSE

with the R option taking its value from the environment variable. The current default value is FALSE, which retains the pre-existing behaviour. It is intended that over time this will be changed to TRUE.

A side-effect of this change is that %do% and %dopar% will (eventually) behave differently for a sequential backend. See this GitHub issue for more discussion on this topic.
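To illustrate the difference, here is a minimal sketch, assuming the sequential backend registered via registerDoSEQ() (the commented results are what the described behaviour implies):

library(foreach)
registerDoSEQ()   # register the sequential backend

x <- 0
res <- foreach(i = 1:3) %dopar% { x <- i }
x   # 3: with the old default, the loop body ran in the global environment

options(foreachDoparLocal = TRUE)
x <- 0
res <- foreach(i = 1:3) %dopar% { x <- i }
x   # 0: the body now runs in a local environment, matching parallel backends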

In the background, the repo has also been updated to use modern tooling such as Roxygen, RMarkdown and testthat. None of these should affect how the package works, although there are some minor changes to documentation formats (in particular, the vignettes are now in HTML format rather than PDF).

Some further changes are also planned down the road, to better integrate foreach with the future package by Henrik Bengtsson. See this GitHub issue for further details.

Please feel free to leave comments, bug reports and pull requests at the foreach repo, or you can contact me directly at hongooi@microsoft.com.


Making our Unity Analyzers Open-Source 


Here at the Visual Studio Tools for Unity team, our mission is to improve the productivity of Unity developers. In Visual Studio 2019 we’ve introduced our Unity Analyzers, a collection of Unity-specific code diagnostics and code fixes. Today we’re excited to make our Unity Analyzers open-source.

Unity Analyzers

Visual Studio and Visual Studio for Mac rely on Roslyn, our compiler infrastructure, to deliver a fantastic C# programming experience. One of my favorite features of Roslyn is the ability to programmatically guide developers when using an API. At the core of this experience, an analyzer detects a code pattern, and can offer to replace it with a more recommended pattern.

A common example that is specific to the Unity API is how you compare tags on your game objects. You could write

collision.gameObject.tag == "enemy";

to compare tags.

But Unity offers a CompareTag method that is more efficient, so we implemented a CompareTag diagnostic that will detect this pattern and offer to use the more optimized method instead. Press Ctrl+. on Windows or Alt+Enter on Visual Studio for Mac to trigger the Quick Fixes, and you’ll be prompted with a preview of the change:
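In code, the before and after look roughly like this (an illustrative sketch; the analyzer’s exact wording may differ):

using UnityEngine;

public class EnemyCollision : MonoBehaviour
{
    void OnCollisionEnter(Collision collision)
    {
        // Flagged by the CompareTag diagnostic: reading the tag property
        // allocates a new string on every comparison.
        if (collision.gameObject.tag == "enemy")
        {
            // handle the enemy collision
        }

        // The offered fix: CompareTag performs the check without the allocation.
        if (collision.gameObject.CompareTag("enemy"))
        {
            // handle the enemy collision
        }
    }
}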

We currently have a dozen analyzers that are shipping in the Tools for Unity, with more being written right now.

Improving the Default Experience

Recently the Roslyn team introduced analyzer suppressors. This feature allows us to programmatically suppress the default set of analyzers that Roslyn ships.

This is great for Unity developers, because it allows the Tools for Unity team to remove warnings or code fix suggestions that do not apply to Unity development.

A common example involves fields decorated with Unity’s SerializeField attribute, which lights them up in the Unity Inspector. For instance, without the Unity Analyzers, Visual Studio would offer to make a serialized field readonly, while we know the Unity engine is setting the value of this field. If you were to accept that code fix, Unity would remove any association you set in the Inspector for this field, which could break things. By writing a suppressor, we can programmatically suppress this behavior while keeping it enabled for standard C# fields.
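To make this concrete, here is an illustrative sketch of such a field (the class and field names are hypothetical):

using UnityEngine;

public class Player : MonoBehaviour
{
    // Visual Studio's default analyzers would offer to make this field
    // readonly, but Unity assigns it from the Inspector at runtime, so the
    // Unity-specific suppressor keeps that suggestion from appearing.
    [SerializeField]
    private int maxHealth = 100;
}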

Available now

Today, the Unity Analyzers ship as part of the Tools for Unity and are enabled in Visual Studio and Visual Studio for Mac. The analyzers run inside Visual Studio, meaning that if you suppress a warning, you might still see it in Unity’s error list. We’re working on improving this for a future release.

Bring your tips and tricks

The Tools for Unity team has a backlog of analyzers, code fixes and suppressors that we’re working on, but we’re always on the lookout for new analyzers that would improve the C# programming experience of Unity developers. The project is easy to get started with. Just head to our README and suggest a new analyzer or even submit a PR to the repository.

See you on GitHub!

The post Making our Unity Analyzers Open-Source  appeared first on Visual Studio Blog.

.NET Framework February 2020 Security and Quality Rollup


Today, we are releasing the February 2020 Security and Quality Rollup Updates for .NET Framework.

Security

The February Security and Quality Rollup Update does not contain any new security fixes. See January 2020 Security and Quality Rollup for the latest security updates.

Quality and Reliability

This release contains the following quality and reliability improvements. Some improvements included in this Security and Quality Rollup were previously released in the Security and Quality Rollup dated January 23, 2020.

Acquisition & Deployment

  • Addresses an issue where the installation of .NET Framework 4.8 on Windows builds prior to 1809 prevents .NET-specific settings from being migrated during a Windows upgrade to build 1809. Note: to prevent this issue, this update must be applied before upgrading to a newer version of Windows.

CLR1

  • A change in .NET Framework 4.8 regressed certain EnterpriseServices scenarios where a single-thread apartment object may be treated as a multi-thread apartment, leading to a blocking failure. This change now correctly identifies single-thread apartment objects as such and avoids this failure.
  • There was a race condition in the portable PDB metadata provider cache that leaked providers and caused crashes in the diagnostic StackTrace API. The fix detects the case where the provider wasn’t being disposed and disposes it.
  • Addresses an issue in Server GC where, if you are truly out of memory when doing SOH allocations (i.e., there has been a full blocking GC and there is still no space to accommodate the SOH allocation), full blocking GCs are triggered over and over again with the trigger reason OutOfSpaceSOH. The fix throws an OutOfMemoryException when this situation is detected, instead of triggering GCs in a loop.
  • Addresses an issue caused by changing process affinity from 1 to N cores.

Net Libraries

  • Strengthens UdpClient against incorrect usage in network configurations with an exceptionally large MTU.

SQL

  • Addresses an issue with SqlClient Bid traces where information wasn’t being printed due to incorrectly formatted strings.

WCF2

  • There’s a race condition when listening paths are being closed down because of an IIS worker process crash while the same endpoints are being reconfigured as listening but pending activation. When a conflict is found, this change allows a retry, on the assumption that the conflict was transient due to this race condition. The retry count and wait duration are configurable via app settings.
  • Added an opt-in retry mechanism when configuring listening endpoints on the WCF Activation service, to address a potential race condition when rapidly restarting an IIS application multiple times under high CPU load, which could result in an endpoint being inaccessible. Customers can opt in to the fix by adding the following AppSetting to SMSvcHost.exe.config under the %windir%\Microsoft.NET\Framework\v4.0.30319 and %windir%\Microsoft.NET\Framework64\v4.0.30319 folders as appropriate. This will retry registering an endpoint 10 times, with a 1-second delay between each attempt, before placing the endpoint in a failure state. <add key="wcf:SMSvcHost:listenerRegistrationRetryDelayms" value="1000"/>

Windows Forms

  • Addresses an issue in System.Windows.Forms.TextBox controls with the ImeMode property set to NoControl. These controls now retain an IME setting consistent with the OS setting, regardless of the order of navigation on the page. The fix applies to CHS with the Pinyin keyboard.
  • Addresses an issue with the System.Windows.Forms.ComboBox control with ImeMode set to ImeMode.NoControl on CHS with the Pinyin keyboard, so that it retains the input mode of the parent container control instead of switching to a disabled IME when navigating via mouse clicks, and when focus moves to this ComboBox from a control with a disabled IME.
  • An accessibility change in .NET Framework 4.8 regressed editing of the IP address UI in the DataGridView in the Create Cluster Wizard of Failover Cluster Services: users couldn’t enter an IP value after a UIA tree restructuring related to moving the editing control to another editing cell. Such custom DataGridView cells (IP address cells) and their inner controls are now excluded from the default UIA tree restructuring to prevent this issue.

WPF3

  • Addresses an issue where, under some circumstances, Popups in high-DPI WPF applications are not shown, are shown at the top-left corner of the screen, or are shown/rendered incompletely.
  • Addresses an issue when creating an XPS document in WPF, where font subsetting may result in a FileFormatException if the process of subsetting would grow the font.
  • Addresses the incorrect width of the text-insertion caret in TextBox et al. when the system DPI exceeds 96. In particular, the caret rendered nothing on a monitor with lower DPI than the primary, in some DPI-aware situations.
  • Addresses a hang arising during layout of Grids with columns belonging to a SharedSizeGroup.
  • Addresses a hang and eventual StackOverflowException arising when opening a RibbonSplitButton, if the app programmatically disables the button and replaces its menu items before the user releases the mouse button.
  • Addresses certain hangs that can arise while scrolling a TreeView.

1 Common Language Runtime (CLR)
2 Windows Communication Foundation (WCF)
3 Windows Presentation Foundation (WPF)

Getting the Update

The Security and Quality Rollup is available via Windows Update, Windows Server Update Services, and Microsoft Update Catalog.

Microsoft Update Catalog

You can get the update via the Microsoft Update Catalog. For Windows 10, .NET Framework 4.8 updates are available via Windows Update, Windows Server Update Services, and the Microsoft Update Catalog. Updates for other versions of .NET Framework are part of the Windows 10 Monthly Cumulative Update.

Note: Customers that rely on Windows Update and Windows Server Update Services will automatically receive the .NET Framework version-specific updates. Advanced system administrators can also make use of the direct Microsoft Update Catalog download links below for .NET Framework-specific updates. Before applying these updates, please carefully review the .NET Framework version applicability, to ensure that you only install updates on systems where they apply.

The following table is for Windows 10 and Windows Server 2016+ versions.

Product Version Cumulative Update
Windows 10 1909 and Windows Server, version 1909
.NET Framework 3.5, 4.8 Catalog 4534132
Windows 10 1903 and Windows Server, version 1903
.NET Framework 3.5, 4.8 Catalog 4534132
Windows 10 1809 (October 2018 Update) and Windows Server 2019 4538122
.NET Framework 3.5, 4.7.2 Catalog 4534119
.NET Framework 3.5, 4.8 Catalog 4534131
Windows 10 1803 (April 2018 Update)
.NET Framework 3.5, 4.7.2 Catalog 4537762
.NET Framework 4.8 Catalog 4534130
Windows 10 1709 (Fall Creators Update)
.NET Framework 3.5, 4.7.1, 4.7.2 Catalog 4537789
.NET Framework 4.8 Catalog 4534129
Windows 10 1703 (Creators Update)
.NET Framework 3.5, 4.7, 4.7.1, 4.7.2 Catalog 4537765
.NET Framework 4.8 Catalog 4537557
Windows 10 1607 (Anniversary Update) and Windows Server 2016
.NET Framework 3.5, 4.6.2, 4.7, 4.7.1, 4.7.2 Catalog 4537764
.NET Framework 4.8 Catalog 4534126
Windows 10 1507
.NET Framework 3.5, 4.6, 4.6.1, 4.6.2 Catalog 4537776

The following table is for earlier Windows and Windows Server versions.

Product Version Security and Quality Rollup
Windows 8.1, Windows RT 8.1 and Windows Server 2012 R2 4538124
.NET Framework 3.5 Catalog 4532946
.NET Framework 4.5.2 Catalog 4534120
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 Catalog 4534117
.NET Framework 4.8 Catalog 4534134
Windows Server 2012 4538123
.NET Framework 3.5 Catalog 4532943
.NET Framework 4.5.2 Catalog 4534121
.NET Framework 4.6, 4.6.1, 4.6.2, 4.7, 4.7.1, 4.7.2 Catalog 4534116
.NET Framework 4.8 Catalog 4534133

Previous Monthly Rollups

The last few .NET Framework Monthly updates are listed below for your convenience:

The post .NET Framework February 2020 Security and Quality Rollup appeared first on .NET Blog.

Python in Visual Studio Code – February 2020 Release


 

We are happy to announce that the February 2020 release of the Python Extension for Visual Studio Code is now available. You can download the Python extension from the Marketplace, or install it directly from the extension gallery in Visual Studio Code. If you already have the Python extension installed, you can also get the latest update by restarting Visual Studio Code or updating it directly in the Extensions view. You can learn more about  Python support in Visual Studio Code in the documentation.

In this release we made improvements that are listed in our changelog, closing a total of 66 issues, including a much faster startup of Jupyter Notebook editor and scaling back of configuration notifications. Keep on reading to learn more!

Jupyter Notebook editor starts up faster

In the January release of the Python extension, we made tremendous improvements to the performance of the Notebook editor. In this release, we continued that effort and took it even further. In our testing benchmarks, we see an additional 2-3X improvement in speed when starting up the Jupyter server and when opening the Notebook editor. First-cell execution is also faster, as the Jupyter server now spins up in the background automatically when notebooks are opened.

Scaling Back of Configuration Notifications

Another piece of feedback we often receive is that when opening a workspace that is already configured for Visual Studio Code without an interpreter selected, the Python extension would throw a lot of notifications prompting the installation of tools. The installations would then fail because no interpreter was selected in the workspace.

Screenshot of three notification prompts: one for interpreter selection and two for tools installation.

In this release, we scaled back the notification prompts for tools installation. They are now only displayed if an interpreter is selected.

Screenshot of a single notification prompt for interpreter selection.

In case you missed it: Jump to Cursor

Although it’s not part of the new improvements included in this release, the Python debugger supports a feature that doesn’t seem to be widely known: Jump to Cursor.

When you start a debug session and the debugger hits a breakpoint, you can right-click on any part of your code, before or after the point where the breakpoint was hit, and select “Jump to Cursor”. This will make the debugger continue its execution from that selected line onward:

(Animated GIF: Jump to Cursor in action)

So if you want to execute pieces of code that the debugger had already passed through, you don’t need to restart the debug session and wait for the execution to reach that point again. You can simply set it to jump to the line you wish to execute.

Call for action!

We’d love to hear your feedback! Did you know about this feature before this blog post? Do you think its name can be improved to better indicate its behaviour? Let us know on the following GitHub issue: https://github.com/microsoft/vscode-python/issues/9947.

Other Changes and Enhancements

In this release we have also added small enhancements and fixed issues requested by users that should improve your experience working with Python in Visual Studio Code. Some notable changes include:

  • Automatically start the Jupyter server when opening a notebook or the interactive window. (#7232)
  • Don’t display output panel when building workspace symbols. (#9603)
  • Fix to a crash when using pytest to discover doctests with unknown line number. (thanks Olivier Grisel) (#7487)
  • Update Chinese (Traditional) translation. (thanks pan93412) (#9548)

We’re constantly A/B testing new features. If you see something different that was not announced by the team, you may be part of the experiment! To see if you are part of an experiment, you can check the first lines in the Python extension output channel. If you wish to opt-out of A/B testing, you can open the user settings.json file (View > Command Palette… and run Preferences: Open Settings (JSON)) and set the “python.experiments.optOutFrom” setting to [“All”], or to specific experiments you wish to opt out from.
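For example, a settings.json that opts out of all experiments would contain:

{
    "python.experiments.optOutFrom": ["All"]
}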

Be sure to download the Python extension for Visual Studio Code now to try out the features above. If you run into any problems, please file an issue on the Python VS Code GitHub page.

 

 

The post Python in Visual Studio Code – February 2020 Release appeared first on Python.

Announcing .NET Interactive – Try .NET includes .NET Notebooks and more


At Microsoft Ignite 2019, we were happy to announce that the "Try .NET global tool" added support for C# and F# Jupyter notebooks. Last week, the same team that brought you .NET Notebooks announced Preview 2 of the .NET Notebook.

Name Change - .NET interactive

As the scenarios for what was "Try .NET" continued to grow, the team wanted a name that encompassed all the experiences they have today, as well as all the experiences they will have in the future. What was the Try .NET family of projects is now .NET Interactive.

The F# community has enjoyed F# in Jupyter Notebooks for years, with the pioneering functional work of Rick Minerich, Colin Gravill and many other contributors! .NET Interactive is a family of tools and kernels that offer support across a variety of experiences, as a first-party Microsoft-supported offering.

.NET Interactive is a group of CLI (command-line interface) tools and APIs that enable users to create interactive experiences across the web, markdown, and notebooks.

.NET Interactive APIs and Tools

Here is what the command line looks like using the dotnet CLI.

  • dotnet interactive global tool:
  • dotnet try global tool:
    • Used for workshops and offline documentation. Interactive markdown with a backing project. I wrote about this in May 2019.
  • trydotnet.js API
    • Currently, only used internally at Microsoft, this API is used on the .NET page and C# documentation. Maybe one day I can use it on my blog? And yours?

Installing .NET Interactive

You can start playing with it today, locally or in the cloud! Seriously. Just click and start using it.

Before you install the .NET Interactive global tool, please make sure you have the .NET Core 3.1 SDK installed, along with Jupyter (the easiest way to get Jupyter is the Anaconda distribution):

  • Open an Anaconda prompt and verify that Jupyter is installed:
> jupyter kernelspec list
  python3        ~\jupyter\kernels\python3
  • Open Windows terminal and install the dotnet interactive global tool:
> dotnet tool install --global Microsoft.dotnet-interactive
  • Switch back to Anaconda prompt and install the .NET kernel. To be clear, here we are using the dotnet CLI to let the Jupyter CLI know that we exist!
> dotnet interactive jupyter install
[InstallKernelSpec] Installed kernelspec .net-csharp in ~\jupyter\kernels\.net-csharp
.NET kernel installation succeeded
[InstallKernelSpec] Installed kernelspec .net-fsharp in ~\jupyter\kernels\.net-fsharp
.NET kernel installation succeeded
[InstallKernelSpec] Installed kernelspec .net-powershell in ~\jupyter\kernels\.net-powershell
.NET kernel installation succeeded
  • While still in the Anaconda prompt, verify that the .NET kernels are installed, like this:
> jupyter kernelspec list
  .net-csharp     ~\jupyter\kernels\.net-csharp
  .net-fsharp     ~\jupyter\kernels\.net-fsharp
  .net-powershell ~\jupyter\kernels\.net-powershell
  python3         ~\jupyter\kernels\python3

Now you can just run "jupyter lab" at the command line and you're ready to go!

More Languages - PowerShell

The .NET kernel now comes with PowerShell support too! In Preview 2, the .NET Interactive team partnered with the PowerShell team to enable this scenario. You can read more in the announcement on the PowerShell blog.

.NET in Jupyter Notebooks

The .NET interactive team is looking forward to hearing your thoughts. You can talk to them at https://github.com/dotnet/interactive

Multi .NET language Notebooks

I wanted to highlight one of the hidden gems .NET Interactive has had since Preview 1: multi-language notebooks. That means that users can switch languages in a single notebook. Here is an example of C#, F#, and PowerShell in a single .ipynb file.

Multiple Language Notebooks

Using one of the language magic commands (#!csharp, #!fsharp, #!pwsh) tells the .NET Interactive kernel to run the cell in a specific language. To see a complete list of the available magic commands, enter the #!lsmagic command into a new cell and run it.
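For example (a minimal sketch of two notebook cells; the cell contents are my own), an F# cell and a PowerShell cell would each start with their respective magic command:

#!fsharp
let greeting = "Hello from F#"
greeting

#!pwsh
"Hello from PowerShell: $(Get-Date)"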

.NET Code in nteract.io

Additionally, you can now write .NET code in nteract.io. Nteract is an open-source organization that builds SDKs, applications, and libraries that help people make the most of interactive notebooks and REPLs. We are excited to have our .NET users take advantage of the rich REPL experience nteract provides, including the nteract desktop app.

Charts and graphs in nteract

To get started with .NET Interactive in nteract please download the nteract desktop app and install the .NET kernels.

Learn More

The team is looking forward to seeing what you build. Moving forward, the team has split dotnet try and dotnet interactive tools into separate repos.

  • For any issues, feature requests, and contributions to .NET Notebooks, please visit the .NET Interactive repo.
  • For any issues, feature requests, and contributions on interactive markdown and trydotnet.js, please visit the Try .NET repo.

Sponsor: Have you tried developing in Rider yet? This fast and feature-rich cross-platform IDE improves your code for .NET, ASP.NET, .NET Core, Xamarin, and Unity applications on Windows, Mac, and Linux.



© 2019 Scott Hanselman. All rights reserved.
     

Creating .NET Core global tools on macOS


One of the really cool aspects about .NET Core is the support for global tools. You can use global tools to simplify common tasks during your development workflow. For example, you can create tools to minify image assets, simplify working with source control, or perform any other task that you can automate with the command line. After developing your tool, you can distribute it on NuGet.org, or any other NuGet repository, to share the tool with others. Since .NET Core is cross platform, your global tools will also work cross platform, assuming your code doesn’t contain any platform specific code. You can find existing global tools here. You can also create local tools, those that are associated with specific projects and not available globally. For more info on local tools see the .NET Core Tools — local installation section in Announcing .NET Core 3.0.

In this post we will discuss how you can create global tools when developing on macOS as well as how to prepare them to distribute using NuGet. Let’s get started with our first global tool. Today, we will be using Visual Studio for Mac, but you can follow similar steps if you are using a different IDE or editor. To ensure you have everything you need to follow this tutorial, download Visual Studio for Mac. The code we will be reviewing in this post is available on GitHub, a link is at the end of this post.

Hello World

Let’s create a very basic global tool that will print “Hello World” to the user. To create our tool, we will work through the following steps:

  1. Create the project
  2. Modify the project file to make it a global tool
  3. Implement our code
  4. Test our new global tool

The first thing you’ll want to do when creating a global tool is to create a project. Since global tools are console applications, we will use the console project template to get started. After launching Visual Studio for Mac you’ll see the dialog below, click New to begin creating the project. If you already have Visual Studio open, you could also use the ⇧⌘N shortcut to open the new project dialog.

(Image: Visual Studio for Mac new project dialog)

 

From here we will create a .NET Core Console project by going to .NET Core > App > Console Application.

visual studio for mac new console project

After selecting Console Application, click Next to select the version of .NET Core. I have selected .NET Core 3.1. Click Next after selecting that, and then provide the name and location for the project. I have named the project HelloTool.

 

Customize the project for NuGet

Now that we’ve created the project, the first thing to do is to customize the project file to add properties that will make this a global tool. To edit the project file, right click on the project in the Solution Pad and select Tools > Edit File. This is demonstrated in the following image.

visual studio for mac menu option to edit the project file

Note: the menu option to edit the project file is moving to the top level in the context menu as Edit Project File soon.

The .csproj file, an MSBuild file that defines the project, will be opened in the editor. To make the project into a global tool, we must enable the project to be packed into a NuGet package. You can do this by adding a property named PackAsTool and setting it to true in the .csproj file. If you are planning to publish the package to NuGet.org you will also want to specify some additional properties that NuGet.org will surface to users. You can see the full list of NuGet related properties that can be set over at NuGet metadata properties. Let’s look at the properties I typically set when creating a global tool. I’ve pasted the .csproj file below.

<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>netcoreapp3.1</TargetFramework>
        
    <!-- global tool related properties -->
    <PackAsTool>true</PackAsTool>
    <ToolCommandName>hellotool</ToolCommandName>
    <PackageOutputPath>./nupkg</PackageOutputPath>
    
    <!-- nuget related properties -->
    <Authors>Sayed Ibrahim Hashimi</Authors>
    <Description>My hello world global tool</Description>
    <Version>1.0.0</Version>
    <Copyright>Copyright 2020 © Sayed Ibrahim Hashimi. All rights reserved.</Copyright>
    <PackageLicenseExpression>Apache-2.0</PackageLicenseExpression>
    <RepositoryUrl>https://github.com/sayedihashimi/global-tool-sample</RepositoryUrl>
    <RepositoryType>git</RepositoryType>
    <PackageType>DotNetCliTool</PackageType>
  </PropertyGroup>
</Project>

There are two sections of properties that I have added here. Below you’ll find a description of each of these properties.

Property Name Description
PackAsTool Set this to true for all global tools, this will enable packing the project into a NuGet package.
ToolCommandName Optional name for the tool.
PackageOutputPath Path to where the .nupkg file should be placed.
Authors Name of the author(s) of the project.
Description Description that will be shown in nuget.org and other places.
Version Version of the NuGet package. For each release to nuget.org this must be unique.
Copyright Copyright declaration.
PackageLicenseExpression An SPDX license identifier or expression.
RepositoryUrl Specifies the URL for the repository where the source code for the package resides and/or from which it’s being built.
RepositoryType Repository type. Examples: git, tfs.
PackageType For tools specify this as DotNetCliTool.

 

It’s a good idea to specify these properties now, so that you can focus on the code for the global tool. If you’re just creating a tool to play around with, or for personal use, I recommend just setting PackAsTool, ToolCommandName, and PackageOutputPath. Now let’s take a closer look at the code.

In the Program.cs file you’ll find that the following code was added when we created the project.

using System;

namespace HelloTool {
    class Program {
        static void Main(string[] args) {
            Console.WriteLine("Hello World!");
        }
    }
}

Since the code is already printing “Hello World!”, we can use this as is with no modifications for now. Let’s move on to try executing this as a global tool at the command line. We will first need to package this as a NuGet package.

Pack and Test the tool

To create a NuGet package from this project you can use the built in Pack command offered by Visual Studio for Mac. To get there, right-click your project and then select Pack as seen in the next image.

visual studio for mac pack menu option

After you invoke the Pack command, the NuGet package (.nupkg file) will be created in the directory we specified in the PackageOutputPath property. In our case it will go into a folder named nupkg in the project folder. Now that we have created a NuGet package from the project, we will register the global tool and try it from the command line.
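If you prefer to stay in the terminal, running the pack command from the project directory should produce the same package (it honors the PackageOutputPath property we set earlier):

dotnet pack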

To install and test the global tool, first open the Terminal app, or your favorite alternative. You’ll want to change directory into the project directory and run the commands from there. You will need to register the package as a tool using the following command.

dotnet tool install --global --add-source ./nupkg HelloTool

Here we are calling dotnet tool with the install command to install the tool. By passing --global, the tool will be available from any folder on your machine. We passed --add-source with the location of the folder where the .nupkg file is located, so that our new tool can be located and installed. After executing this command, you should see output like the following:

You can invoke the tool using the following command: hellotool

Tool 'hellotool' (version '1.0.0') was successfully installed.

Let’s try to invoke the tool with hellotool to see if it’s working.

output from hellotool

If you run into a command not found error, you may need to modify your PATH variable. You should ensure that the full path to ~/.dotnet/tools is included in the PATH variable. By full path, I mean with the ~ expanded, for example /Users/sayedhashimi/.dotnet/tools in my case. Now that we have seen how to get started with a tool, let’s do something more interesting by adding some code to the project.

Adding parameters using DragonFruit

To make this more realistic, we want to add some features: support for parameters, displaying help, and more. We could implement all of this directly using System.CommandLine, but the .NET Core team is working on a layer that simplifies it for us, called DragonFruit. We will use DragonFruit to help us create this command quickly.

Note: DragonFruit is currently an experimental app model for System.CommandLine. This information is subject to change as it is being developed.

Now we want to add a couple of parameters to the app to make it more realistic. Before we do that, let’s first add the DragonFruit NuGet package to the project and then go from there. To add the NuGet package right click on your app and select Manage NuGet Packages.

visual studio for mac manage nuget packages menu option

When the Manage NuGet Packages dialog appears, first check the Show pre-release packages checkbox in the lower left, then search for System.CommandLine.DragonFruit. After that, click the Add Package button to add the package to your project. See the following image.

visual studio for mac add dragonfruit nuget package

Now that we have added the package, we are ready to add some parameters to the global tool. With DragonFruit it’s really easy to add parameters to your tools: you just declare the parameters as arguments in the Main method itself. Let’s add a name and an age parameter to this global tool. The updated code for Program.cs is shown below.

using System;

namespace HelloTool {
    class Program {
        static void Main(string name = "World", int age = 0) {
            string message = age <= 0 ? $"Hello there {name}!" : $"Hello there {name}, who is {age} years old";
            Console.WriteLine(message);

        }
    }
}

In the code above we have added the parameters as arguments in the Main method, and then we craft a new message using those values. Now we want to test that the changes that we have made are working correctly before making further changes. If you want to just run the app you can use Run > Start without Debugging, or Run > Start Debugging from the menu bar to run it as a vanilla console app. What we want to do is to test it as a .NET Core global tool as well. To do that we will follow the steps below.

  1. Pack the project in Visual Studio for Mac
  2. Uninstall the global tool
  3. Install the global tool
  4. Run the tool

Since we will need to install/uninstall the tool often, we can simplify that by creating a new Custom Tool in Visual Studio for Mac to facilitate this. To get started go to Tools > Add Custom Tool.

visual studio for mac add custom tool menu option

This will bring up a new dialog where we can create the two custom tools to handle install/uninstall. To start, click the Add button and then configure each tool.

visual studio for mac custom tool add button

We want to create two tools with the following values:

Install Tool

  • Title = Install project global tool
  • Command = dotnet
  • Arguments = tool install --global --add-source ./nupkg ${ProjectName}
  • Working directory = ${ProjectDir}

Uninstall Tool

  • Title = Uninstall project global tool
  • Command = dotnet
  • Arguments = tool uninstall --global ${ProjectName}
  • Working directory = ${ProjectDir}

The Uninstall tool, for example, should look like the following:

visual studio for mac custom tool uninstall sample

After adding these tools you’ll see them appear in the Tools menu as shown below.

visual studio for mac tools menu with custom tools

To invoke these newly added tools, you can simply click on the command. Since we authored these tools using the ${ProjectName} parameter, these commands should work on your other global tool projects, assuming the tool name is the same as the project name. Let’s try the install command. Take a look at the experience in the animated GIF below, which shows the tool being invoked and the output being displayed in the Application Output Pad.

gif showing visual studio mac pack and install via custom tool

We can see that the tool was successfully installed. Now we can go back to the terminal to test the global tool itself. Go back to the terminal and execute hellotool, and verify that you see the message Hello there World!

output from running hellotool

The drawback to this approach is that you have to perform three separate steps in the IDE: pack, uninstall, and install. You can simplify this by modifying the project file, the .csproj file. Add the following target to your .csproj file immediately before </Project>.

<Target Name="InstallTool" DependsOnTargets="Pack">
    <Exec Command="dotnet tool uninstall --global $(ToolCommandName)" IgnoreExitCode="true"/>
    <Exec Command="dotnet tool install --global --add-source $(PackageOutputPath) $(ToolCommandName)"/>
    <Exec Command="$(ToolCommandName) --help" />
</Target>

This is an MSBuild target that we can call to take care of all three steps for us. It will also call the tool to display its help output after it’s installed. After adding this target to your .csproj file, you can execute it with dotnet build -t:InstallTool. In Visual Studio for Mac you can create a new Custom Tool with the following properties to invoke this target.

  • Title = Install tool
  • Command = dotnet
  • Arguments = -t:InstallTool
  • Working directory = ${ProjectDir}

Then you can invoke this new custom tool instead of the three steps we outlined. Since it’s not always feasible to edit the project file, this doc will continue using the previous approach.

Help output

Now let’s take a look at the default help output that we get when the DragonFruit package is in the project. Let’s execute hellotool -h, the output is shown below.

help output from hellotool

With the default help output, the names of the parameters are shown as the description. This is helpful, but not ideal. Let’s improve it. To do that all we need to do is to add some /// comments to the main method, with the descriptions. The updated code is shown in the following code block.

using System;

namespace HelloTool {
    class Program {
        /// <summary>
        /// A simple global tool with parameters.
        /// </summary>
        /// <param name="name">Your name (required)</param>
        /// <param name="age">Your age</param>
        static void Main(string name = "World", int age = 0) {
            string message = age <= 0 ? $"Hello there {name}!" : $"Hello there {name}, who is {age} years old";
            Console.WriteLine(message);
        }
    }
}

All we have done is add some descriptive comments to the Main method for each parameter; DragonFruit will take care of wiring them up for us. Now let’s go through the flow of pack, uninstall, install and test one more time. After going through that, when you invoke hellotool -h the output should be as shown below. If you are still seeing the old output, ensure you’ve used the Pack command on the project before installing.

hellotool help output

Now we can see that the help output contains some descriptive text. This is looking much better! Let’s invoke the tool and pass in some parameters. Let’s invoke hellotool --name dotnet-bot --age 5 and examine the output.

hellotool output

It looks like the tool is behaving as expected. From here you can continue developing your command line tool and then publish it to NuGet.org, or another NuGet repository, to share it with others. Since we have already configured the NuGet properties in the project we can upload the .nupkg that is created after invoking the Pack menu option. After you have published the NuGet package, users can install it with the following command.

dotnet tool install --global <packagename>

This will download the package from the NuGet repository and then install the tool globally for that user. The uninstall command that users will use is the same as what you’ve been using during development. When you make changes to your tool and republish to NuGet.org, remember to change the version number in the .csproj file. Each package published to a NuGet repository needs to have a unique version for that package.
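For completeness, the matching uninstall command is:

dotnet tool uninstall --global <packagename>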

Summary & Wrap Up

In this post we covered a lot of material on how to create a .NET Core global tool. If you’d like to learn more about creating global tools, take a look at the additional resources section below. If you have any questions or feedback, please leave a comment on this post.

Additional Resources

Join us for our upcoming Visual Studio for Mac: Refresh() event on February 24 for deep dive sessions into .NET development using Visual Studio for Mac, including a full session on developing Blazor applications.

Make sure to follow us on Twitter at @VisualStudioMac and reach out to the team. Customer feedback is important to us and we would love to hear your thoughts. Alternatively, you can head over to Visual Studio Developer Community to track your issues, suggest a feature, ask questions, and find answers from others. We use your feedback to continue to improve Visual Studio 2019 for Mac, so thank you again on behalf of our entire team.

Documentation links

The post Creating .NET Core global tools on macOS appeared first on Visual Studio Blog.

Announcing the preview of Azure Shared Disks for clustered applications


Today, we are announcing the limited preview of Azure Shared Disks, the industry’s first shared cloud block storage. Azure Shared Disks enables the next wave of block storage workloads migrating to the cloud including the most demanding enterprise applications, currently running on-premises on Storage Area Networks (SANs). These include clustered databases, parallel file systems, persistent containers, and machine learning applications. This unique capability enables customers to run latency-sensitive workloads, without compromising on well-known deployment patterns for fast failover and high availability. This includes applications built for Windows or Linux-based clustered filesystems like Global File System 2 (GFS2).

With Azure Shared Disks, customers now have the flexibility to migrate clustered environments running on Windows Server, including Windows Server 2008 (which has reached End-of-Support), to Azure. This capability is designed to support SQL Server Failover Cluster Instances (FCI), Scale-out File Servers (SoFS), Remote Desktop Servers (RDS), and SAP ASCS/SCS running on Windows Server.

We encourage you to get started and request access by filling out this form.

Leveraging Azure Shared Disks

Azure Shared Disks provides a consistent experience for applications running on clustered environments today. This means that any application that currently leverages SCSI Persistent Reservations (PR) can use this well-known set of commands to register nodes in the cluster to the disk. The application can then choose from a range of supported access modes for one or more nodes to read or write to the disk. These applications can deploy in highly available configurations while also leveraging Azure Disk durability guarantees.

The below diagram illustrates a sample two-node clustered database application orchestrating failover from one node to the other.
   2-node failover cluster
The flow is as follows:

  1. The clustered application running on both Azure VM 1 and  Azure VM 2 registers the intent to read or write to the disk.
  2. The application instance on Azure VM 1 then takes an exclusive reservation to write to the disk.
  3. This reservation is enforced on Azure Disk and the database can now exclusively write to the disk. Any writes from the application instance on Azure VM 2 will not succeed.
  4. If the application instance on Azure VM 1 goes down, the instance on Azure VM 2 can now initiate a database failover and take over the disk.
  5. This reservation is now enforced on the Azure Disk, and it will no longer accept writes from the application on Azure VM 1. It will now only accept writes from the application on Azure VM 2.
  6. The clustered application can complete the database failover and serve requests from Azure VM 2.

The below diagram illustrates another common workload, which consists of multiple nodes reading data from the disk to run parallel jobs, for example, training machine learning models.
   n-node cluster with multiple readers
The flow is as follows:

  1. The application registers all virtual machines to the disk.
  2. The application instance on Azure VM 1 then takes an exclusive reservation to write to the disk while opening up reads from other Virtual Machines.
  3. This reservation is enforced on Azure Disk.
  4. All nodes in the cluster can now read from the disk. Only one node writes results back to the disk on behalf of all the nodes in the cluster.

Disk types, sizes, and pricing

Azure Shared Disks are available on Premium SSDs and support disk sizes of P15 (i.e. 256 GB) and greater. Support for Azure Ultra Disk will be available soon. Azure Shared Disks can be enabled as data disks only (not OS disks). Each additional mount of an Azure Shared Disk (Premium SSD) will be charged based on disk size. Please refer to the Azure Disks pricing page for details on limited preview pricing.

Azure Shared Disks vs Azure Files

Azure Shared Disks provides shared access to block storage, which can be leveraged by multiple virtual machines. You will need to use a common Windows or Linux-based cluster manager, such as Windows Server Failover Cluster (WSFC), Pacemaker, or Corosync, for node-to-node communication and to enable write locking. If you are looking for a fully managed file service on Azure that can be accessed using the Server Message Block (SMB) or Network File System (NFS) protocol, check out Azure Premium Files or Azure NetApp Files.

Getting started

You can create Azure Shared Disks using Azure Resource Manager templates. For details on how to get started and use Azure Shared Disks in preview, please refer to the documentation page. For updates on regional availability and Ultra Disk availability, please refer to the Azure Disks FAQ. Here is a video of Mark Russinovich from Microsoft Ignite 2019 covering Azure Shared Disks.
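As a rough illustration, a Resource Manager template resource for a shared disk might look something like the following; the maxShares property controls how many VMs may attach the disk at once, and the name, size, and region here are placeholders (the exact API version may differ):

{
  "type": "Microsoft.Compute/disks",
  "apiVersion": "2019-07-01",
  "name": "mySharedDisk",
  "location": "westus",
  "sku": {
    "name": "Premium_LRS"
  },
  "properties": {
    "creationData": {
      "createOption": "Empty"
    },
    "diskSizeGB": 1024,
    "maxShares": 2
  }
}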

In the coming weeks, we will be enabling Portal and SDK support. Support for Azure Backup and  Azure Site Recovery is currently not available. Refer to the Managed Disks documentation for detailed instructions on all disk operations.

If you are interested in participating in the preview, you can now get started by requesting access.

SQL Server runs best on Azure. Here’s why.


SQL Server customers migrating their databases to the cloud have multiple choices for their cloud destination. To thoroughly assess which cloud is best for SQL Server workloads, two key factors to consider are:

  1. Innovations that the cloud provider can uniquely provide.
  2. Independent benchmark results.

What innovations can the cloud provider bring to your SQL Server workloads?

As you consider your options for running SQL Server in the cloud, it's important to understand what the cloud provider can offer both today and tomorrow. Can they provide you with the capabilities to maximize the performance of your modern applications? Can they automatically protect you against vulnerabilities and ensure availability for your mission-critical workloads?

SQL Server customers benefit from our continued expertise developed over the past 25 years, delivering performance, security, and innovation. This includes deploying SQL Server on Azure, where we provide customers with innovations that aren’t available anywhere else. One great example of this is Azure BlobCache, which provides fast, free reads for customers. This feature alone provides tremendous value to our customers that is simply unmatched in the market today.

Additionally, we offer preconfigured, built-in security and management capabilities that automate tasks like patching, high availability, and backups. Azure also offers advanced data security that enables both vulnerability assessments and advanced threat protection. Customers benefit from all of these capabilities both when using our Azure Marketplace images and when self-installing SQL Server on Azure virtual machines.

Only Azure offers these innovations.

What are their performance results on independent, industry-standard benchmarks?

Benchmarks can often be useful tools for assessing your cloud options. It's important, though, to ask if those benchmarks were conducted by independent third parties and whether they used today’s industry-standard methods.

(Image: bar graphs comparing the performance and price differences between Azure and AWS)

The images above show performance and price-performance comparisons from the February 2020 GigaOm performance benchmark blog post.

In December, an independent study by GigaOm compared SQL Server on Azure Virtual Machines to AWS EC2 using a field test derived from the industry standard TPC-E benchmark. GigaOm found Azure was up to 3.4x faster and 87 percent cheaper than AWS. Today, we are pleased to announce that in GigaOm’s second benchmark analysis, using the latest virtual machine comparisons and disk striping, Azure was up to 3.6x faster and 84 percent cheaper than AWS.1 

These results continue to demonstrate that SQL Server runs best on Azure.

Get started today

Learn more about how you can start taking advantage of these benefits today with SQL Server on Azure.

 


1Price-performance claims based on data from a study commissioned by Microsoft and conducted by GigaOm in February 2020. The study compared price performance between SQL Server 2019 Enterprise Edition on Windows Server 2019 Datacenter edition in Azure E32as_v4 instance type with P30 Premium SSD Disks and the SQL Server 2019 Enterprise Edition on Windows Server 2019 Datacenter edition in AWS EC2 r5a.8xlarge instance type with General Purpose (gp2) volumes. Benchmark data is taken from a GigaOm Analytic Field Test derived from a recognized industry standard, TPC Benchmark™ E (TPC-E). The Field Test does not implement the full TPC-E benchmark and as such is not comparable to any published TPC-E benchmarks. Prices are based on publicly available US pricing in West US for SQL Server on Azure Virtual Machines and Northern California for AWS EC2 as of January 2020. The pricing incorporates three-year reservations for Azure and AWS compute pricing, and Azure Hybrid Benefit for SQL Server and Azure Hybrid Benefit for Windows Server and License Mobility for SQL Server in AWS, excluding Software Assurance costs. Actual results and prices may vary based on configuration and region.


Decompilation of C# code made easy with Visual Studio


Have you ever found yourself debugging a .NET project or memory dump only to be confronted with a No Symbols Loaded page? Or maybe experienced an exception occurring in a 3rd party .NET assembly but had no source code to figure out why? You can now use Visual Studio to decompile managed code even if you don’t have the symbols, allowing you to look at code, inspect variables and set breakpoints.

We have recently released a new decompilation and symbol creation experience in the latest preview of Visual Studio 2019 version 16.5 that will aid debugging in situations where you might be missing symbol files or source code. As we launch this feature, we want to ensure that we are creating the most intuitive workflows so please provide feedback.

Decompilation and PDB generation with ILSpy

Decompilation is the process used to produce source code from compiled code. To accomplish this we are partnering with ILSpy, a popular open-source project, which provides first-class, cross-platform symbol generation and decompilation. Our engineering team is working to integrate ILSpy technology into valuable debugging scenarios.

What are symbol files? Why do you need them?

Symbol files are a record of how the compiler translates your source code into Common Intermediate Language (CIL); the CIL is then compiled by the Common Language Runtime and executed by the processor. The .NET compiler symbol files are program database files (.pdb), created as part of the build. The symbol file maps statements in the source code to the CIL instructions in the executable.

Debuggers are able to use the information in the symbol file to determine the source file and line number that should be displayed, and the location in the executable to stop at when you set a breakpoint. Debugging without a symbol file would make it difficult to set breakpoints on a specific line of code or even step through code.

Visual Studio currently provides the option to debug code outside your project source code, such as .NET or third-party code your project calls by specifying the location of the .pdb (and optionally, the source files of the external code). However, in many cases finding the correct symbol files or source code may not be feasible.

By integrating decompilation directly into your debugging experiences we hope to provide developers with the most direct route to troubleshooting issues in 3rd party managed code. We are initially integrating the decompilation experiences into the Module Window, No Symbols Loaded, and Source Not Found page.

No Symbols Loaded/Source Not Found

There are several ways in which Visual Studio will try to step into code for which it does not have symbols or source files available:

  • Break into code from a breakpoint or exception.
  • Step into code.
  • Switch to a different thread.
  • Change the stack frame by double-clicking a frame in the Call Stack window.

Under these circumstances, the debugger displays the No Symbols Loaded or Source Not Found page and provides an opportunity to load the necessary symbols or source.

In the following example I have opened a crash dump in Visual Studio and have hit an exception in framework code. I do not have the original source code, so if I try to switch to the main thread, I see the No Symbols Loaded page. However, it is now possible to decompile the code directly on this page and see the origins of the exception.

(Image: decompiling directly from the No Symbols Loaded page)

Module Window

During debugging the Modules window is a great place to get information related to the assemblies and executables currently in memory. To open the Modules window, select Debug > Windows > Modules.

Once you have identified a module that requires decompilation, you can right-click on the module and select “Decompile Source to Symbol File”. This action creates a symbol file containing decompiled source which in turn permits you to step into 3rd party code directly from your source code.

It will also be possible to extract source code to disk by right clicking on a module with embedded source and clicking “Extract Embedded Source”. This process exports source files to a Miscellaneous files folder for further analysis. In the following example I open an extracted .cs file and set a break point to better understand the 3rd party code I am using.

Shows decompilation and source extraction from Visual Studio Module window

Some Considerations

Decompilation of the CIL format, used in .NET assemblies, back into a higher-level language like C# has some inherent limitations:

  • Decompiled source does not always resemble the original source code. Decompilation is best used to understand how the program is executing and not as a replacement for the original source code.
  • Debugging code that was decompiled from an assembly that was built using compiler optimizations may encounter the following issues:
    • Breakpoints may not always bind to the matching source location
    • Stepping may not always step to the correct location
    • Async/await and yield state machines may not be fully resolved
    • Local variables may not have accurate names
    • Some variables may not be able to be evaluated when the IL stack is not empty
  • Source code extracted from an assembly is placed in the solution as Miscellaneous files:
    • The name and location of the generated files is not configurable.
    • They are temporary and will be deleted by Visual Studio.
    • Files are placed in a single folder and any folder hierarchy that the original sources had is not used.
    • The file name for each file has a checksum hash of the file.
  • Decompilation of optimized or release modules produces non-user code. If the debugger breaks in your decompiled non-user code, for example, the No Source window will appear. To disable Just My Code, navigate to Tools > Options (or Debug > Options) > Debugging > General, and deselect Enable Just My Code.
  • Decompilation will only generate source code files in C#.

Try it now!

Download the preview and try out decompilation, and let us know how it works for you! Please reach out and give us feedback over at Developer Community. Finally, we also have a survey for collecting feedback on the new experiences here. We look forward to hearing from you.

The post Decompilation of C# code made easy with Visual Studio appeared first on Visual Studio Blog.

New optimizations boost performance in preview builds of Microsoft Edge


Starting with Microsoft Edge build 81.0.389.0 on 64-bit Windows 10, we’ve enabled new toolchain optimizations that should provide a substantial performance improvement in general browsing workloads.

We’ve measured an up to 13% performance improvement in the Speedometer 2.0 benchmark when compared to Microsoft Edge 79. Speedometer measures performance by simulating user interactions in a sample web app across a number of DOM APIs and popular JavaScript frameworks used by top sites, and is generally regarded as a good proxy for real-world performance across a number of different subsystems including the DOM, JavaScript engine, layout, and more.

We’d like your help validating these improvements in your real-world browsing as we approach our next Beta release later this month. You can try out these improvements by comparing performance in the latest Dev or Canary builds to Microsoft Edge 80 or earlier.

The details:

We measured Speedometer 2.0 in ten consecutive runs on Microsoft Edge 79, where the optimizations are not yet implemented.  The results are below.

Run | Microsoft Edge v. 79.0.309.71
1 | 84.6
2 | 85.4
3 | 85.3
4 | 85.3
5 | 84.6
6 | 84.9
7 | 85.8
8 | 84.7
9 | 84.8
10 | 84.3
Median | 84.85

Benchmarked on Windows 10 1909 (OS Build 18363.592) on a Microsoft Surface Pro 5 (Intel(R) i5-8250U CPU 1.60GHz and 8 GB RAM), with no other applications running and no additional browser tabs open.

We then ran Speedometer 2.0 on recent versions of Microsoft Edge 81 which include the new optimizations, with the following results.

Run | Microsoft Edge v. 81.0.410.0 | Microsoft Edge v. 81.0.403.1
1 | 96.3 | 96.7
2 | 91.1 | 95.7
3 | 91.7 | 95.2
4 | 96 | 95.5
5 | 97.6 | 95.5
6 | 97.4 | 95.9
7 | 96.8 | 96.2
8 | 94.4 | 96.2
9 | 96.4 | 95.5
10 | 94.4 | 95.4
Median | 96.15 | 95.6

Benchmarked on Windows 10 1909 (OS Build 18363.592) on a Microsoft Surface Pro 5 (Intel(R) i5-8250U CPU 1.60GHz and 8 GB RAM), with no other applications running and no additional browser tabs open.

We would love for you to try the new optimizations in Dev or Canary and let us know if you notice these improvements in  your real-world experience. Please join us on the Microsoft Edge Insider forums or Twitter to discuss your experience and let us know what you think! We hope you enjoy the changes and look forward to your feedback!

The post New optimizations boost performance in preview builds of Microsoft Edge appeared first on Microsoft Edge Blog.

It’s time for you to install Windows Terminal


It's time. It's the feature-complete release of the Windows Terminal. Stop reading, and go install it. I'll wait here. You done? OK.

You can download the Windows Terminal from the Microsoft Store or from the GitHub releases page. There's also an unofficial Chocolatey release. I recommend the Store version if possible.

NOTE: Have you already downloaded the Terminal, maybe a while back? Enough has changed that you should delete your profiles.json and start over.

BIG NOTE: Educate yourself about the difference between a console, a terminal, and a shell. This isn't a new "DOS Prompt." Windows Terminal is the view into whatever shell makes you happy.

What's new? A lot. This is the end of new features before 1.0, though; from here on it's all about bug fixes and rock-solid stability.

The Windows Terminal

So you've downloaded the Windows Terminal...now what?

You might initially be underwhelmed. This is a Terminal, it's not going to hold your hand.

The Documentation is just getting started but you can start here! This would be a great way for you to get involved in Open Source, by the way!

Here's the big new change that is very exciting!

Windows Terminal Command Line Arguments

You may know you can run Windows Terminal with "wt.exe", and this version now supports command-line arguments! Here are a few examples to give you a taste:

  • wt ; split-pane -p "Windows PowerShell" ; split-pane -H wsl.exe
  • wt -d .
  • wt -d c:\github

At this point you can get as advanced as you want. Make other icons, pin them to the taskbar, have a blast. There are subcommands like new-tab, split-pane, and focus-tab; a sketch of how they compose follows.
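
For example, here's a hedged sketch that chains several subcommands with semicolons (the "Windows PowerShell" profile name and the c:\github folder are illustrative; substitute profiles and paths you actually have):

wt -d c:\github ; split-pane -V -p "Windows PowerShell" ; focus-tab -t 0

This opens a tab in the repo folder, splits it vertically with a PowerShell pane alongside, and puts focus back on the first tab.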

Other Windows Terminal things to note

Please share YOUR blogs, YOUR profiles, YOUR favorite themes and terminal hacks as well!


Sponsor: Have you tried developing in Rider yet? This fast and feature-rich cross-platform IDE improves your code for .NET, ASP.NET, .NET Core, Xamarin, and Unity applications on Windows, Mac, and Linux.



© 2019 Scott Hanselman. All rights reserved.
     

Spread the love this Valentine’s Day!

You may know that Microsoft Rewards lets you earn points just by searching with Bing or shopping with Microsoft and redeem those points towards gift cards and other items, but did you know that you can also use these points to donate to nonprofits? Here at Bing, we wanted to invite you this Valentine’s Day to share your affection not only with your loved ones, but also with causes close to your heart.

Just join or log in to Rewards today, start earning points to make a difference, and browse our donation options! 
 
 
We currently have 22 nonprofits available in the US for you to choose from. The minimum nonprofit donation is $1 – redeemed from 1,000 points – and you can also donate in $3 or $5 increments (3,000 and 5,000 points, respectively). We'd also like to call out the American Red Cross, as Microsoft will match all your Rewards donations to them through February 29th.

Thanks, and Happy Valentine’s Day!

How Visual Studio Code leverages Azure Pipelines Artifact Caching Tasks to improve CI

Azure Offline Backup with Azure Data Box now in preview


An ever-increasing number of enterprises, even as they adopt a hybrid IT strategy, continue to retain mission-critical data on-premises and look towards the public cloud as an effective offsite for their backups. Azure Backup, Azure's built-in data protection solution, provides a simple, secure, and cost-effective mechanism to back up these data assets over the network to Azure, while eliminating on-premises backup infrastructure. After the initial full backup of data, Azure Backup transfers only incremental changes in the data, thereby delivering continued savings on both network and storage.

With the exponential growth in critical enterprise data, the initial full backups are reaching terabyte scale. Transferring these large full-backups over the network, especially in high-latency network environments or remote offices, may take weeks or even months. Our customers are looking for more efficient ways beyond fast networks to transfer these large initial backups to Azure. Microsoft Azure Data Box solves the problem of transferring large data sets to Azure by enabling the “offline” transfer of data using secure, portable, and easy-to-get Microsoft appliances.

Announcing the preview of Azure Offline Backup with Azure Data Box

Today, we are thrilled to add the power of Azure Data Box to Azure Backup, and announce the preview program for offline initial backup of large datasets using Azure Data Box! With this preview, customers will be able to use Azure Data Box with Azure Backup to seed large initial backups (up to 80 TB per server) offline to an Azure Recovery Services Vault. Subsequent backups will take place over the network.

Diagram showing how Azure offline backup works in the Azure ecosystem.

This preview is currently available to customers of the Microsoft Azure Recovery Services agent and is a much-awaited addition to the existing support for offline backup using the Azure Import/Export service.

Key benefits

The Azure Data Box addition to Azure Backup delivers core benefits of the Azure Data Box service while offering key advantages over the Azure Import/Export based offline backup.

  • Simple—No need to procure your own Azure-compatible disks or connectors as with the Azure Import-based offline backup. Simply order and receive one or more Data Box appliances from your Azure subscription, plug them in, fill them with backup data, return them to Azure, and track all of it on the Azure portal.
  • Built-in—The Azure Data Box-based offline backup experience is built into the Recovery Services agent, so you can easily discover and detect your received Azure Data Box appliances, transfer backup data, and track the completion of the initial backup directly from the agent.
  • Secure—Azure Data Box is a tamper-resistant appliance that comes with ruggedized casing to handle bumps and bruises during transport and supports 256-bit AES encryption of your data.
  • Efficient—Get freedom from provisioning temporary storage (staging locations) or using additional tools to prepare disks and copy data, as in the Azure Import-based offline backup. Azure Backup copies backup data directly to Azure Data Box, delivering savings on storage and time, and eliminating additional copy tools.

Getting started

Seeding your large initial backups using Azure Backup and Azure Data Box involves the following high-level steps. 

  1. Order and receive your Azure Data Box based on the amount of data you want to back up from a server. Order an Azure Data Box Disk if you want to back up less than 7.2 TB of data, or an Azure Data Box to back up up to 80 TB of data.
  2. Install and register the latest Recovery Services agent to an Azure Recovery Services Vault.
  3. Select the “Transfer using Microsoft Azure Data Box disks” option for offline backup as part of scheduling your backups with the Recovery Services agent.
    Screenshot of the Schedule Backup Wizard
  4. Trigger Backup to Azure Data Box from the Recovery Services Agent.
  5. Return Azure Data Box to Azure.

Azure Data Box and Azure Backup will automatically upload the data to the Azure Recovery Services Vault. Refer to this article for a detailed overview of pre-requisites and steps to take advantage of Azure Data Box when seeding your initial backup offline with Azure Backup.

Offline backup with Azure Data Box on Data Protection Manager and Azure Backup Server

If you are using System Center Data Protection Manager or Microsoft Azure Backup Server and are interested in seeding large initial backups using Azure Data Box, drop us a line at systemcenterfeedback@microsoft.com for access to early previews.

Related links and additional content

Azure Firewall Manager now supports virtual networks


This post was co-authored by Yair Tor, Principal Program Manager, Azure Networking.

Last November we introduced Microsoft Azure Firewall Manager preview for Azure Firewall policy and route management in secured virtual hubs. This also included integration with key Security as a Service partners: Zscaler, iboss, and soon Check Point. These partners support branch-to-internet and virtual network-to-internet scenarios.

Today, we are extending Azure Firewall Manager preview to include automatic deployment and central security policy management for Azure Firewall in hub virtual networks.

Azure Firewall Manager preview is a network security management service that provides central security policy and route management for cloud-based security perimeters. It makes it easy for enterprise IT teams to centrally define network- and application-level rules for traffic filtering across multiple Azure Firewall instances that span different Azure regions and subscriptions in hub-and-spoke architectures, for traffic governance and protection. In addition, it empowers DevOps with better agility through derived local firewall security policies that are implemented across organizations.

For more information see Azure Firewall Manager documentation.

Figure one – Azure Firewall Manager Getting Started page

 

Hub virtual networks and secured virtual hubs

Azure Firewall Manager can provide security management for two network architecture types:

  •  Secured virtual hub—An Azure Virtual WAN Hub is a Microsoft-managed resource that lets you easily create hub-and-spoke architectures. When security and routing policies are associated with such a hub, it is referred to as a secured virtual hub.
  •  Hub virtual network—This is a standard Azure Virtual Network that you create and manage yourself. When security policies are associated with such a hub, it is referred to as a hub virtual network. At this time, only Azure Firewall Policy is supported. You can peer spoke virtual networks that contain your workload servers and services. It is also possible to manage firewalls in standalone virtual networks that are not peered to any spoke.

Whether to use a hub virtual network or a secured virtual hub depends on your scenario:

  •  Hub virtual network—Hub virtual networks are probably the right choice if your network architecture is based on virtual networks only, requires multiple hubs per region, or doesn't use hub-and-spoke at all.
  •  Secured virtual hub—Secured virtual hubs might address your needs better if you need to manage routing and security policies across many globally distributed secured hubs. Secured virtual hubs offer high-scale VPN connectivity, SD-WAN support, and third-party Security as a Service integration, and you can use Azure to secure your internet edge for both on-premises and cloud resources.

The following comparison table in Figure 2 can assist in making an informed decision:

 

Capability | Hub virtual network | Secured virtual hub
Underlying resource | Virtual network | Virtual WAN hub
Hub-and-spoke | Using virtual network peering | Automated using hub virtual network connection
On-premises connectivity | VPN Gateway up to 10 Gbps and 30 S2S connections; ExpressRoute | More scalable VPN Gateway up to 20 Gbps and 1,000 S2S connections; ExpressRoute
Automated branch connectivity using SD-WAN | Not supported | Supported
Hubs per region | Multiple virtual networks per region | Single virtual hub per region; multiple hubs possible with multiple Virtual WANs
Azure Firewall – multiple public IP addresses | Customer provided | Auto-generated (to be available by general availability)
Azure Firewall Availability Zones | Supported | Not available in preview; to be available by general availability
Advanced internet security with third-party Security as a Service partners | Customer-established and managed VPN connectivity to the partner service of choice | Automated via the Trusted Security Partner flow and partner management experience
Centralized route management to attract traffic to the hub | Customer-managed UDR (roadmap: UDR default route automation for spokes) | Supported using BGP
Web Application Firewall on Application Gateway | Supported in virtual network | Roadmap: can be used in spoke
Network Virtual Appliance | Supported in virtual network | Roadmap: can be used in spoke

Figure 2 – Hub virtual network vs. secured virtual hub

Firewall policy

Firewall policy is an Azure resource that contains network address translation (NAT), network, and application rule collections as well as threat intelligence settings. It's a global resource that can be used across multiple Azure Firewall instances in secured virtual hubs and hub virtual networks. New policies can be created from scratch or inherited from existing policies. Inheritance allows DevOps to create local firewall policies on top of organization mandated base policy. Policies work across regions and subscriptions.

Azure Firewall Manager orchestrates Firewall policy creation and association. However, a policy can also be created and managed via REST API, templates, Azure PowerShell, and CLI.
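
For example, here is a minimal sketch using the Azure CLI (the resource group and policy names are hypothetical, and this assumes the azure-firewall CLI extension is installed):

# Add the Azure Firewall CLI extension
az extension add --name azure-firewall
# Create an organization-wide base policy
az network firewall policy create --name org-base-policy --resource-group my-rg
# Create a local policy that inherits from the base policy
az network firewall policy create --name dev-local-policy --resource-group my-rg --base-policy org-base-policy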

Once a policy is created, it can be associated with a firewall in a Virtual WAN Hub (aka secured virtual hub) or a firewall in a virtual network (aka hub virtual network).

Firewall Policies are billed based on firewall associations. A policy with zero or one firewall association is free of charge. A policy with multiple firewall associations is billed at a fixed rate.

For more information, see Azure Firewall Manager pricing.

The following table compares the new firewall policies with the existing firewall rules:

 

Aspect | Firewall Policy | Firewall Rules
Contains | NAT, network, and application rules, plus threat intelligence settings | NAT, network, and application rules
Protects | Virtual hubs and virtual networks | Virtual networks only
Portal experience | Central management using Firewall Manager | Standalone firewall experience
Multiple firewall support | Firewall Policy is a separate resource that can be used across firewalls | Manually export and import rules, or use third-party management solutions
Pricing | Billed based on firewall associations (see Pricing) | Free
Supported deployment mechanisms | Portal, REST API, templates, PowerShell, and CLI | Portal, REST API, templates, PowerShell, and CLI
Release status | Preview | General availability

Figure 3 – Firewall Policy vs. Firewall Rules

Next steps

For more information on topics covered here, see the following blogs, documentation, and videos:

Azure Firewall central management partners:


5 steps for boosting your digital transformation with Microsoft 365

New Azure Firewall certification and features in Q1 CY2020


This post was co-authored by Suren Jamiyanaa, Program Manager, Azure Networking

We continue to be amazed by the adoption, interest, positive feedback, and the breadth of use cases customers are finding for our service. Today, we are excited to share several new Azure Firewall capabilities based on your top feedback items:

  • ICSA Labs Corporate Firewall Certification.
  • Forced tunneling support now in preview.
  • IP Groups now in preview.
  • Customer configured SNAT private IP address ranges now generally available.
  • High ports restriction relaxation now generally available.

Azure Firewall is a cloud native firewall as a service (FWaaS) offering that allows you to centrally govern and log all your traffic flows using a DevOps approach. The service supports both application and network level filtering rules and is integrated with the Microsoft Threat Intelligence feed for filtering known malicious IP addresses and domains. Azure Firewall is highly available with built-in auto scaling.

ICSA Labs Corporate Firewall Certification

ICSA Labs is a leading vendor in third-party testing and certification of security and health IT products, as well as network-connected devices. They measure product compliance, reliability, and performance for most of the world’s top technology vendors.

Azure Firewall is the first cloud firewall service to attain the ICSA Labs Corporate Firewall Certification. For the Azure Firewall certification report, see information here. For more information, see the ICSA Labs Firewall Certification program page.
Front page of the ICSA Labs Certification Testing and Audit Report for Azure Firewall.

Figure one – Azure Firewall now ICSA Labs certified.

Forced tunneling support now in preview

Forced tunneling lets you redirect all internet-bound traffic from Azure Firewall to your on-premises firewall or a nearby Network Virtual Appliance (NVA) for additional inspection. By default, forced tunneling isn't allowed on Azure Firewall, which ensures all of its outbound Azure dependencies are met.

To support forced tunneling, service management traffic is separated from customer traffic. An additional dedicated subnet named AzureFirewallManagementSubnet is required with its own associated public IP address. The only route allowed on this subnet is a default route to the internet, and BGP route propagation must be disabled.

With this configuration, the AzureFirewallSubnet can now include routes to any on-premises firewall or NVA to process traffic before it's passed to the internet. You can also publish these routes via BGP to AzureFirewallSubnet if BGP route propagation is enabled on that subnet. For more information, see the Azure Firewall forced tunneling documentation.
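
As an illustrative sketch of the prerequisite setup with the Azure CLI (the resource group, virtual network, and address ranges are hypothetical; the subnet name itself is mandatory), you might pre-create the management subnet and its public IP before deploying the firewall with forced tunneling enabled:

# Dedicated management subnet; the name AzureFirewallManagementSubnet is required
az network vnet subnet create --resource-group my-rg --vnet-name hub-vnet --name AzureFirewallManagementSubnet --address-prefixes 10.0.64.0/26
# Standard-SKU public IP for the management traffic
az network public-ip create --resource-group my-rg --name fw-mgmt-pip --sku Standard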



Figure two – Creating a firewall with forced tunneling enabled.

IP Groups now in preview

IP Groups is a new top-level Azure resource that allows you to group and manage IP addresses in Azure Firewall rules. You can give your IP group a name and create one by entering IP addresses or uploading a file. IP Groups eases your management experience and reduces time spent managing IP addresses by letting you use a group in a single firewall or across multiple firewalls. For more information, see the IP Groups in Azure Firewall documentation.


Figure three – Azure Firewall application rules utilize an IP group.
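
As a quick sketch with the Azure CLI (the group name, resource group, and address ranges are hypothetical), you might create an IP Group and then reference it from your firewall rules:

# Create an IP Group from a set of address ranges
az network ip-group create --name on-prem-servers --resource-group my-rg --ip-addresses 10.10.0.0/24 10.20.0.0/24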

Customer configured SNAT private IP address ranges

Azure Firewall provides automatic Source Network Address Translation (SNAT) for all outbound traffic to public IP addresses. Azure Firewall doesn't SNAT when the destination IP address is in a private IP address range per IANA RFC 1918. If your organization uses a public IP address range for private networks, or opts to force-tunnel Azure Firewall internet traffic via an on-premises firewall, you can configure Azure Firewall not to SNAT additional custom IP address ranges. For more information, see Azure Firewall SNAT private IP address ranges.


Figure four – Azure Firewall with custom private IP address ranges.
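
As a hedged sketch (the firewall name, resource group, and ranges are hypothetical, and this assumes the azure-firewall CLI extension exposes a --private-ranges parameter; verify against your CLI version), you might keep the RFC 1918 defaults while adding a custom range:

# Keep the IANA RFC 1918 defaults and also skip SNAT for a custom range
az network firewall update --name my-fw --resource-group my-rg --private-ranges IANAPrivateRanges 192.168.1.0/24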

High ports restriction relaxation now generally available

Since its initial preview release, Azure Firewall has had a limitation that prevented network and application rules from including source or destination ports above 64,000. This default behavior blocked RPC-based scenarios, and specifically Active Directory synchronization. With this update, customers can use any port in the 1–65535 range in network and application rules.

Next steps

For more information on everything we covered above please see the following blogs, documentation, and videos.

Azure Firewall central management partners:

Announcing the new Bing Webmaster Tools

Over the last few months, we have heard from the webmaster ecosystem that the Bing Webmaster Tools user interface is slow and outdated. With our user-first focus, we have taken your feedback and have been working on modernizing the tools. We are delighted to announce the first iteration of the refreshed Bing Webmaster Tools portal.



The refreshed portal is built on key principles: keeping the design simple and making the tools faster, cleaner, more responsive, and actionable.

We have updated the backend datastore to improve data extraction and redesigned the user experience to make it more user-friendly and intuitive. With users' needs in mind, the portal is also device-responsive, giving users the flexibility to access it across devices.

In the first iteration, the new portal has three key features:
  1. Backlinks – The Inbound Links report in the current portal is integrated with the Disavow Links tool to become the new Backlinks report in the refreshed portal.
  2. Search Performance – The Page Traffic and Search Keywords reports are combined into the new Search Performance report.
  3. Sitemaps – A refreshed version of the current portal's Sitemaps page.
We are releasing the new portal to a select set of users this week and will roll it out to all users by the first week of March. To access the new portal, sign in to Bing Webmaster Tools, navigate to the Sitemaps, Inbound Links, Page Traffic, or Search Keywords reports, and click the links to open the new portal.


Over the next few months, we will focus on moving all functionality to the new portal. During the transition, users will be able to use the current and new pages simultaneously for a short period. We will deprecate functionality in the old portal a few weeks after it becomes available in the new portal. We will strive to make this transition seamless and exciting for our users.

The Bing Webmaster APIs will stay the same, so users who call our webmaster APIs to get their data programmatically do not have to make any changes.

Reach out to us and share feedback on Twitter and Facebook and let us know how you feel about the new Bing Webmaster Tools. If you encounter any issues, please raise a service ticket with our support team.

Regards,
The Bing Webmaster Tools team
 

The new Office app now generally available for Android and iOS


A few months ago, we introduced a new mobile app called Office—a whole new experience designed to be your go-to app for getting work done on a mobile device. It combines Word, Excel, and PowerPoint into a single app and introduces new capabilities that enable you to create content and accomplish tasks in uniquely mobile…

The post The new Office app now generally available for Android and iOS appeared first on Microsoft 365 Blog.

How to install Visual Studio Code on a Raspberry Pi 4 in minutes


Four years ago I wrote how to BUILD (literally compile) Visual Studio Code for a Raspberry Pi ARM machine. Just a few months later in November, community member Jay Rodgers released his labor of love - nightly builds of VS Code for Chromebooks and Raspberry Pi.

If you want to get unofficial builds of Visual Studio Code running on a Raspberry Pi (I know you have one!) you should use his instructions. He has done a lot of work to make this very simple. Head over to http://code.headmelted.com/ and make it happen for yourself, now!

Jay says:

I've maintained the project for a few years now and it has expanded from providing binaries for Pi to providing support and tools to get VS Code running on low-end ARM devices that might not otherwise support it like Chromebooks (which make up about 60% of the devices in schools now).

The project has really taken off among educators (beyond what I would have thought), not least because they're restricted to the devices provided and it gives them a route to teach coding to students on these computers that might not otherwise be there.

Again, Jay is doing this out of love for the community and the work that makes it happen is hosted at https://github.com/headmelted/codebuilds. I'd encourage you to head over there right now and give him a STAR.

There are so many community members out there doing "thankless" work. Thank them. Thank them with a thank-you email, a donation, or just your kindness when you file an issue and complain about all the free work they do for you.

I just picked up a Raspberry Pi 4 from Amazon, and I was able to get a community build of VS Code running on it easily!

Open a terminal, run "sudo -s", and then run this script (again, the script is open source):

. <( wget -O - https://code.headmelted.com/installers/apt.sh )

Jay has done the work! Those are just the APT instructions, but he's got Chrome OS, APT, YUM, and a manual option over at http://code.headmelted.com/!

Thank you for making this so much easier for us all.

Visual Studio Code on a Raspberry Pi 4

Love Raspberry Pis? Here's some fun stuff you can do with the Raspberry that you bought, the one you meant to do fun stuff with, and the one in your junk drawer. DO IT!

Enjoy!


Sponsor: Couchbase gives developers the power of SQL with the flexibility of JSON. Start using it today for free with technologies including Kubernetes, Java, .NET, JavaScript, Go, and Python.



© 2019 Scott Hanselman. All rights reserved.
     