Announcement

Maximize Unreal Engine Development with the Latest Integrations in Visual Studio 2022

Since our announcement last month, the team has been working hard on a new round of Unreal Engine integrations. Today, we are happy to show you the next set of features that can level up your game development productivity. In this article, you will learn how to stream Unreal Engine logs, see Unreal Header Tool warnings in Visual Studio, and discover how to be more productive working with HLSL files. All features mentioned below are available in the latest Visual Studio 2022 Preview. Curious to see these features in action? I chatted with Leslie Richardson in a special edition of Visual Studio Toolbox to demo many of the recently available game dev features in Visual Studio.

Setting Up Unreal Engine Integrations

Unreal Engine integrations will only show up when you are working on an Unreal Engine project. To ensure these features are active, double-check that the “IDE support for Unreal Engine” component is enabled in the “Game development with C++” workload in the VS Installer.

Stream Unreal Engine Log

We spoke with Unreal Engine developers and discovered how frustrating task switching can be, particularly when they need to frequently switch between the UE editor and Visual Studio. As part of our continued effort to reduce this pain, we are happy to introduce the ability to see your Unreal Engine logs without leaving Visual Studio.

Upon pressing F5, Visual Studio will stream Unreal Engine logs to the UE Log window. To see the logs from the Unreal Engine Editor, click View -> Other Windows -> UE Log. When the Visual Studio debugger is attached to your game, logs are streamed automatically. Alternatively, you can enable cross-process capturing by pressing the “Record” button, which lets you stream logs even when the debugger is not attached. To filter your logs, use the “Categories” or “Verbosity” dropdowns.

This is currently an experimental feature, and we would greatly appreciate your feedback. 
Leave your thoughts by commenting on Unreal Engine Log Feedback.

Code Analysis for Unreal Engine

Code Analysis is an important part of the software development workflow. By identifying potential errors before compilation, you can save valuable time in your developer inner loop. Today, we are adding the first of many Unreal Engine-specific Code Analysis checks designed to make you more productive.

In Visual Studio, you can now see warnings and errors generated by the Unreal Header Tool. Upon saving a file, Visual Studio will run the Unreal Header Tool in the background and display any warnings or errors in the Error List or as purple squiggles in the editor. For additional information, please visit the documentation page for the Unreal Header Tool.

This feature is off by default. To enable it, go to “Tools -> Options -> Environment -> Preview Features” and check the box next to “Code Analysis with Unreal Header Tool (C++)”. We look forward to hearing from you about UE Code Analysis; please leave feedback by commenting in the UE Code Analysis Feedback Ticket. Stay tuned for more Code Analysis checks in the upcoming previews.

Creating shaders is an important workflow for game developers. We worked with Tim Jones, the author of the popular HLSL Tools extension, to bring syntax […]

Read More

Even faster builds with Incredibuild 10 and Visual Studio 17.6 Preview 3

Note: This post was co-authored with Incredibuild’s Director of Product Marketing, Yohai West.

We are pleased to announce that Visual Studio version 17.6 Preview 3 includes Incredibuild’s most advanced developer acceleration platform: Incredibuild 10. This release includes several notable new features that empower teams to speed up the development process:

- Patent-pending Build Cache technology allows developers to cache build outputs so that they can be reused by all team members.
- A smart and flexible enterprise license management mechanism, managed via a new web-based Coordinator user interface.
- Incredibuild Cloud optimization automatically manages the best mix of on-demand and spot resources, enabling organizations to use smaller and more affordable machines while maintaining optimal performance and cost.

In this post we’ll detail how these features can improve your daily development.

Build Cache – cache what you can; distribute the rest

Incredibuild 10’s most significant addition is its Build Cache technology. Incredibuild breaks down development processes into smaller tasks that can be executed independently, and Build Cache saves time and resources by reusing the cached outputs of previously executed tasks. Build Cache extends the incremental builds you are likely already familiar with by providing access to the outputs that your entire team has created. This means that an incremental build only has to build your changes and can rely on the cache when you have merged a teammate’s unrelated changes into yours. Additionally, temporarily working on a different branch won’t cause a large rebuild when you switch back to your original branch. For any task outputs that are not in the cache, the Incredibuild Grid can distribute those tasks across its pool of compute cores. 
The Incredibuild Grid can allocate a pool of machines to meet the needed capacity for the tasks, including on-premises machines and static and on-demand spot instances provided by Incredibuild Cloud. The machines in the grid don’t need a compiler installed or the code present, as the Incredibuild Grid takes care of it all. Once those tasks are completed and cached using Build Cache, they never have to be executed again, dramatically reducing build times for your entire team. These features working in tandem mean that you can cache what you can and distribute the rest.

Build from home without impacting your speed

Working from home affects build speeds due to limited upstream bandwidth. Build Cache lets you rely more on downstream bandwidth, giving you greater speed and better performance when starting new builds. With Build Cache you can reuse previous build data stored on your local machine to dramatically reduce build times without impacting your bandwidth.

Deployment Examples

Build Cache can be deployed in different ways depending on how your Clients connect to Endpoints. A single Client can function as its own Endpoint (local), multiple Clients can connect to a single Endpoint (shared), and more complex or dynamic deployments are possible as well.

Local: Uses the same Initiator Agents to host the Build Cache Endpoint and Build Cache Client. This means that each agent can only benefit from the cache of builds that were previously run on the same machine. This can be ideal if you are not sharing code with other developers, or if you are working from home with limited bandwidth.

Shared: Uses a single Agent to host the Build Cache Endpoint. Multiple Initiator […]

Read More

Fill in the ISO C++ Developer Survey

The ISO C++ developer survey runs every year and is a great help to us in prioritizing work based on what the community cares about. It only takes about 10 minutes to complete and closes tomorrow, so please take the time to fill it out.

Sy Brand
C++ Developer Advocate, C++ Team

Read More

Documentation for C++20 Ranges

C++20 introduced Ranges to the standard library: a new way of expressing composable transformations on collections of data. This feature adds a huge amount of expressive power and flexibility to C++. As with many features in C++, that power comes with new concepts to learn and some complexity which can be difficult to navigate. One way of taming that complexity is through clear, complete, comprehensive documentation.

Christopher Di Bella and Sy Brand (one of the co-authors of this post) presented their ideas for C++ documentation in the era of concepts in their CppCon 2021 talk. Tyler Whitney (the other co-author, and manager of our C++ documentation at Microsoft) has expanded these ideas and exhaustively documented the range adaptors available in the standard library for you. To check it out, see the range adaptors documentation on Microsoft Learn. In this post, we’ll talk through some of the complexity of using and documenting Ranges, and outline the principles behind the documentation that Tyler has written. But first, a quick introduction to Ranges for those of you who are not familiar with the feature.

What are Ranges?

At a high level, a range is something that you can iterate over. A range is represented by an iterator that marks the beginning of the range and a sentinel that marks its end. The sentinel may be the same type as the begin iterator, or it may be different, which lets Ranges support operations that simple iterator pairs can’t. The C++ Standard Library containers such as vector and list are ranges.

A range abstracts iterators in a way that simplifies and amplifies your ability to use the Standard Template Library (STL). STL algorithms usually take iterators that point to the portion of the collection that they should operate on. For example, you could sort a vector with std::sort(myVector.begin(), myVector.end());. 
However, carrying out an algorithm across an entire range is such a common operation that we’d rather not have to retrieve the begin and end iterators every time. With ranges, you can instead call std::ranges::sort(myVector);.

But perhaps the most important benefit of ranges is that you can compose STL algorithms that operate on ranges in a style that’s reminiscent of functional programming. Traditional C++ algorithms don’t compose well. For example, if you wanted to build a vector of squares from the elements in another vector that are divisible by three, you could write something like:

```cpp
std::vector<int> input = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
std::vector<int> intermediate, output;
std::copy_if(input.begin(), input.end(), std::back_inserter(intermediate),
             [](const int i) { return i % 3 == 0; });
std::transform(intermediate.begin(), intermediate.end(), std::back_inserter(output),
               [](const int i) { return i * i; });
```

With ranges, you can produce a range with the same values without the intermediate vector:

```cpp
// requires /std:c++20
std::vector<int> input = { 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 };
auto output = input
    | std::views::filter([](const int n) { return n % 3 == 0; })
    | std::views::transform([](const int n) { return n * n; });
```

Besides being easier to read, this code avoids the memory allocation that’s required for the intermediate vector and its contents. It also allows you to compose two operations: each element that’s divisible by three is combined with an operation to square that element. The pipe (|) symbol chains the operations together and is read […]

Read More