From the blog

Try out our battle pass sample using Unity Gaming Services Use Cases

For live games, especially multiplayer ones, the server should be the source of truth for most of the players’ data. This prevents cheating tactics that could give one player an advantage over another, or grant a player items or currencies that would otherwise need to be purchased or earned. Similarly, for seasonal rewards or battle passes to be fair to all players, all data and decisions need to be managed server-side.

In the design of this battle pass sample, the Cloud Code (beta) service does most of the heavy lifting on the backend. Cloud Code allows you to write and run your game logic away from the client. Other services used in this sample include Cloud Save (beta), which lets you store player data in the cloud; here, it holds the player’s season progress in a flat key-value system, along with a battle pass ownership flag for the current season. Game Overrides (powered by Remote Config) lets you create personalized in-game player experiences; in this sample, it determines the content of the current season and the battle pass tiers. All of the tier rewards in the battle pass are either currencies or inventory items, which are managed through Economy (beta). There is also one Purchase set up in Economy for exchanging gems for a battle pass.

As with any game where player data is managed by an online backend, each player needs to sign in to the game. For this, we’re using Authentication (beta). Once the user is signed in, all Unity Gaming Services SDKs automatically send the player’s unique ID with every server request.

Here’s how the sample works: Seasonal reward configuration data is sent from Game Overrides to the client, and is also available (read-only) in Cloud Code. The game client uses this data to build the UI. Cloud Code uses it to determine which rewards to grant a player who claims a valid reward tier.
Cloud Save is used to track the player’s progress through the reward tiers. The player will have an array of mutually exclusive tier states with three possible values: Locked, Unlocked, or Claimed. Cloud Save also stores a value indicating whether the player has purchased the battle pass for the current season.
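The sample’s actual server logic runs in Cloud Code, but the tier-state rules described above can be sketched in Python. The names TierState and claim_tier, and the premium-tier check, are illustrative assumptions, not code taken from the sample:

```python
from enum import Enum

class TierState(Enum):
    """Mutually exclusive states for one reward tier, as stored in Cloud Save."""
    LOCKED = "Locked"
    UNLOCKED = "Unlocked"
    CLAIMED = "Claimed"

def claim_tier(tier_states, index, owns_battle_pass, tier_is_premium):
    """Validate and apply a claim for a single reward tier.

    tier_states plays the role of the player's tier-state array read from
    Cloud Save; the validation mirrors the server-side idea that only an
    Unlocked tier may be claimed, and premium rewards require ownership.
    """
    if tier_states[index] is not TierState.UNLOCKED:
        raise ValueError("tier must be Unlocked before it can be claimed")
    if tier_is_premium and not owns_battle_pass:
        raise ValueError("premium rewards require battle pass ownership")
    tier_states[index] = TierState.CLAIMED  # would be persisted back to Cloud Save
    return tier_states

# A player who has claimed tier 0 and unlocked tier 1 claims tier 1.
states = [TierState.CLAIMED, TierState.UNLOCKED, TierState.LOCKED]
claim_tier(states, 1, owns_battle_pass=False, tier_is_premium=False)
```

Because the claim decision happens server-side against server-held state, a modified client cannot claim a locked tier or a premium reward it does not own.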

Read More

Unity and Meta Immersive Learning partner to expand VR education and inspire creators

Immersive technologies like virtual reality (VR) and augmented reality (AR) are transforming how people work, socialize, play, and learn. More and more, schools and educators are recognizing the power of these tools to reimagine learning and inspire students to be creators – not just consumers – in the metaverse. The market for extended reality (XR) is expanding, as is the demand for talent to create these immersive digital experiences. According to Burning Glass, AR jobs are projected to grow 172% and VR jobs are projected to grow 170% globally in the next 10 years.

As the XR job market broadens, schools find themselves in the position of needing to prepare the future workforce. Even though many schools have rapidly adopted XR technology, enabling their students to create and learn in new ways, equity gaps are growing, too. Most schools, especially those serving underrepresented and low-income communities, still experience barriers to accessing the promise of XR, with cost and readiness being the most prevalent factors. To address this, Unity Social Impact and Meta Immersive Learning have partnered to increase access to AR/VR hardware, high-quality educational content, and resources to prepare educators to teach.

We are excited to partner with Meta Immersive Learning on our shared goals to propel immersive learning and diversify the XR creator workforce. The Create with VR Grant and the Higher Ed XR Innovation Grant – two core components of the partnership – will increase access to hardware and software, provide training and preparation for educators, and support educational institutions in creating immersive learning experiences.

“At Unity, we believe the world is a better place with more creators in it. This partnership brings that belief to life by ensuring education is accessible to all, regardless of zip code. Every student should have the opportunity to achieve their full potential,” says Jessica Lindl, vice president of social impact at Unity.
“As we all know, there has been a major shift not only in the global labor market but also in how we work. Therefore, it’s more important than ever to connect young people with the skills needed for future jobs, as these help chart a clear path from learning to earning opportunities.” John Cantarella, vice president of community and impact partnerships at Meta, says, “As part of our commitment to support the next generation of creators through the Meta Immersive Learning initiative, we’re excited to partner with Unity, a trusted and innovative name in the XR industry. Together, we can empower aspiring creators to learn and grow in the metaverse – removing barriers to immersive technologies and unlocking new opportunities.”

Read More

Compiling TMS WEB Core Projects with the DCC

1. Introduction

TMS WEB Core is a framework that allows the user to write JavaScript applications in the Pascal language; the Pascal code is transpiled to JavaScript. TMS WEB Core supports three IDEs: Embarcadero RAD Studio with Delphi, Microsoft Visual Studio Code, and Lazarus. This article is primarily concerned with the Delphi IDE and shows how to use hidden features to accelerate work with the IDE.

2. Pas2JS Transpiler

TMS WEB Core uses the Pas2JS transpiler to convert the Pascal code to JavaScript. In the following, the transpiler is abbreviated as Pas2JS. Pas2JS is basically compatible with the Delphi language, but there are exceptions. Because of these exceptions, the Delphi IDE sometimes does not work correctly with some basic features like Code Insight.

2.1. Code Insight

Code Insight is a feature of the Delphi IDE that helps the user write code faster. Code Insight includes Code Completion, Parameter Completion, Code Hints, Tooltip Expression, Tooltip Insight, Go To Definition, Block Completion, Class Completion, and Error Insight. It can only work if the current project is a Pascal project that can be compiled with the Delphi compiler.

2.2. Delphi Compiler

The Delphi compiler is the so-called DCC (Delphi Command-line Compiler), which normally compiles Delphi projects. For a TMS WEB Core project, things are different: a TMS WEB Core project is compiled with Pas2JS, not with the DCC. This has consequences for Code Insight, which only works for projects that can be, and are, compiled with the DCC.

2.3. Compiling WEB Core projects with the DCC

TMS WEB Core has a wonderful hidden feature that allows the user to additionally compile WEB Core projects with the DCC. This feature can be enabled under Tools > Options > TMS WEB > Options. If "Show the DCC button" is set to True, a tool button appears in the toolbar that lets the user toggle between the DCC and Pas2JS. (The original post includes a screenshot of the toolbar with Pas2JS enabled.)
(The original post includes screenshots of a compiled project, of the toolbar with the DCC enabled, and of code completion working when the project is compiled with the DCC.)

2.4. Compilation Speed

Since the DCC is one of the fastest compilers in the world, it compiles much faster than Pas2JS. Hence it is good practice to compile a WEB Core project with the DCC while coding. The user can always check with Ctrl+F9 whether the code compiles.

3. Special Language Features

Pas2JS introduces a few special language features that cannot be compiled directly with the DCC. The following shows how they can be written in a DCC-compatible way.

3.1. ASM

Pas2JS allows writing plain JavaScript in so-called “ASM blocks”. Here is an example:

  class function TIntegerHelper.ToString(const AValue: Integer): string; assembler;
  asm
    return AValue.toString();
  end;

With the help of the internal compiler defines WEBLIB and PAS2JS, you can write this code in the following way:

  class function TIntegerHelper.ToString(const AValue: Integer): string;
  begin
  {$IFDEF WEBLIB}
    asm
      return AValue.toString();
    end;
  {$ELSE}
    Result := IntToStr(AValue);
  {$ENDIF}
  end;

This code now compiles and works with both the DCC and Pas2JS.

3.2. WEBLib.Utils

The counterparts of many utility functions from Delphi’s System.SysUtils unit are currently defined in the Pas2JS unit WEBLib.Utils. With the upcoming version 2.2 of Pas2JS they will be moved to SysUtils. That is why it is currently […]

Read More

20 Fun Facts About The Best Low Code Platforms

Businesses are growing, and new challenges are arising. At the same time, there is pressure to reduce costs by automating tasks. The obvious solution is to lean on custom applications, but building them takes a lot of time and is unaffordable for most businesses. Don’t make up your mind about this trend just yet, though. Here we present a collection of facts and information about the best low-code platforms, so you can reach a new level of understanding of them.

Solving today’s challenges means working smarter, not harder

This ever-changing digital world produces new ideas and practices every day, and with increasing demand for IT, businesses are required to take things to the next level with these innovations. But that takes lots of engineering hours and overwork. Companies are overwhelmed with different software and application requirements, and it is getting hard to fulfil all their software needs. This is where low-code and no-code app development comes to the rescue.

The learning curve of low-code development tools is modest. Because of this, we see all these trends around no-code development tools for creating data-oriented apps or e-commerce web apps. These more rigid solutions provide a set of blocks to assemble into a package, with quite limited customization options. This has both pros and cons. On the benefits side, software engineers can spend most of their time creating business-critical solutions for further automation. On the other hand, by relying entirely on these pre-packaged low-code tools, you might not get the results you expect, or you may run into intractable issues when you need more customization or find the solution incapable of scaling up as your needs grow. Picking the right low-code solution is as important to efficiency and future growth as recognizing that low-code is the smart choice for you in the first place.

What is the future of the low-code market?

Business insiders are talking about the future of the low-code market.
According to [Gartner], by 2024, low-code application development will be responsible for more than 65% of all application development activity. For instance, most online shops and e-commerce websites are built with low-code development tools. The lower level of technical knowledge required to put together an online commerce solution using low-code techniques has revolutionized e-commerce site creation. Until the arrival of this kind of low-code solution, website owners needed assistance from web developers with very specific – and expensive – development skills in order to create their online shopping solutions. Now, arguably, a business owner with only modest technical knowledge can create a reliable and relatively secure online shopping destination. This booming industry will probably generate revenue of more than $185 billion by 2030, climbing from $10 billion in 2019, and is envisioned to progress quickly. [GlobeNewswire]

Of course, low-code platforms do not close the software developer gap, but they can make things better by giving business owners the flexibility to start their business online. If you have a significant user audience, you may be able to create more customized and scalable products with the best development tools, like RAD Studio.

What is RAD Studio?

RAD Studio is one of the best developer ecosystems. You can use Delphi and C++ with the FireMonkey or VCL frameworks to create cross-platform and native applications for Android, […]

Read More

UI Toolkit at runtime: Get the breakdown

Let’s clarify how UI Toolkit can ensure smoother workflows while creating UI.

Creating UI in collaboration with artists can be a complex task. While the artist is editing the Canvas to add colors and materials, the developer adds scripts, behaviors, and OnClick listeners. Afterward, when merging occurs, merge conflicts can arise and lead to issues that require swift resolution. UI Toolkit prevents such merge conflicts by having the artist work on the UXML and USS files while C# handles all of the logic. For example, button handling is done with C# only, which queries the button using its name and adds logic without editing any UXML or USS files.

Not only does this process ease merging, it simplifies future style changes. For instance, if all project fonts suddenly had to be changed, you wouldn’t need to go through each asset, one by one, to edit the text settings. This avoids tedious work that can lead to oversights – and the bigger the game, the more complicated this becomes. With UI Toolkit, Panel Settings hold all of the text settings, so to change the fonts under a UI Document, you only need to edit those Panel Settings. Although Editor scripts can similarly assist with UGUI, UI Toolkit’s framework handles this process automatically.

A Visual Element is the base class of every UI Toolkit element: buttons, images, text, and so on. You can think of it as the GameObject of UI Toolkit. Meanwhile, UI Builder (Window > UI Toolkit > UI Builder) is a visual tool that helps create and edit interfaces without writing code. This is useful for artists and designers alike, as it allows them to visualize the UI as it’s being built. As the premier tool for people already familiar with web technologies, UI Toolkit also improves collaboration between artists and developers by separating logic and style to refine organization and avoid file conflicts.
While UI Builder takes care of the elements’ positions, style, and design, code can be handled in a distinct section of the project to query the parts of UXML that it needs, and connect logic to it.

Read More

Metaverse Minute: Leaving no trace with digital twins

When our team at Unity says, “We believe the world is a better place with more creators in it,” it isn’t just a catchy tagline – we mean it. To effect change at scale, we need to come up with solutions across a wide array of sectors, fueled by creative thought. And this is especially critical in the case of climate change. Climate change is a global threat that we believe real-time 3D creators can play a paramount role in combating. For Earth Day, we want to spotlight Unity’s digital twin technology and how its application is mobilizing communities and motivating change. We’re excited to showcase the way that Unity’s platform is empowering the business community to take action on climate change across three key industries: construction, fashion, and events.

Read More

GitLab.com is moving to 15.0 with a few breaking changes

GitLab 15.0 is coming to GitLab.com. Along with exciting new features, it also includes planned deprecations, because it is our major version release for 2022. We try to minimize breaking changes, but some are needed to improve workflows, performance, scalability, and more. These changes will go live on GitLab.com sometime between April 23 and May 22, through our daily deployments, leading up to the official release of 15.0 on May 22. GitLab 15.0 for self-managed users will also be released on May 22. Keep reading to learn more about these important changes.

Audit events for repository push events (announced in 14.3)

Audit events for repository events are now deprecated and will be removed in GitLab 15.0. These events have always been disabled by default and had to be manually enabled with a feature flag. Enabling them can generate too many events, which can dramatically slow down GitLab instances. For this reason, they are being removed.

External status check API breaking changes (announced in 14.8)

The external status check API was originally implemented to support pass-by-default requests to mark a status check as passing. Pass-by-default requests are now deprecated. Specifically, the following are deprecated: requests that do not contain the status field, and requests that have the status field set to approved.

Beginning in GitLab 15.0, status checks will only be updated to a passing state if the status field is both present and set to passed. Requests that do not contain the status field will be rejected with a 422 error, and requests that contain any value other than passed will cause the status check to fail. For more information, see the relevant issues.

To align with this change, API calls to list external status checks will also return the value passed rather than approved for status checks that have passed.
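To make the new requirement concrete, here is a small Python sketch of building a status check response body under the 15.0 rules. The endpoint path in the comment, the placeholder host, and the token value are illustrative assumptions; consult the GitLab API documentation for the exact call your instance supports.

```python
import json

# Placeholder host for illustration only.
GITLAB = "https://gitlab.example.com/api/v4"

def status_check_payload(sha, check_id, status):
    """Build the JSON body for a status check response.

    Mirrors the 15.0 behavior described above: a missing status field is
    rejected (422 from the server), and only "passed" marks the check as
    passing; any other value causes the check to fail.
    """
    if status is None:
        raise ValueError("missing status field: rejected with a 422 in 15.0")
    return json.dumps({
        "sha": sha,
        "external_status_check_id": check_id,
        "status": status,  # must be "passed" for the check to pass
    })

body = status_check_payload("a1b2c3d4", 42, "passed")
# The request itself would look roughly like (path assumed, not verified here):
# requests.post(f"{GITLAB}/projects/1/merge_requests/2/status_check_responses",
#               headers={"PRIVATE-TOKEN": "<token>"}, data=body)
```

Clients still sending bare or approved responses should be updated before 15.0 rolls out, since those requests will no longer mark checks as passing.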
OAuth implicit grant (announced in 14.0)

The OAuth implicit grant authorization flow will be removed in our next major release, GitLab 15.0. Any applications that use OAuth implicit grant should switch to alternative supported OAuth flows.

OAuth tokens without expiration (announced in 14.8)

By default, all new applications expire access tokens after 2 hours. In GitLab 14.2 and earlier, OAuth access tokens had no expiration. In GitLab 15.0, an expiry will be automatically generated for any existing token that does not already have one. You should opt in to expiring tokens before GitLab 15.0 is released: edit the application and select Expire access tokens to enable them. Tokens must be revoked or they don’t expire.

OmniAuth Kerberos gem (announced in 14.3)

The omniauth-kerberos gem will be removed in our next major release, GitLab 15.0. This gem has not been maintained and has very little usage. We therefore plan to remove support for this authentication method and recommend using the Kerberos SPNEGO integration instead. You can follow the upgrade instructions to move from the omniauth-kerberos integration to the supported one. Note that we are not deprecating the Kerberos SPNEGO integration, only the old password-based Kerberos integration.

Optional enforcement of PAT expiration (announced in 14.8)

The feature to disable enforcement of PAT expiration is unusual from a security perspective. We have become concerned that this unusual feature could create […]

Read More

GitLab is now an approved SLP vendor in California

GitLab is now an approved vendor under the Software Licensing Program (SLP) with the state of California. This contract allows state and local agencies, including educational institutions in California, to purchase GitLab software licenses at an agreed-upon discount, reducing costs and streamlining the procurement process. Under the contract, agencies will have greater access to GitLab’s complete DevOps solution, which empowers organizations to deliver software faster and more efficiently.

Established in 1994, California’s SLP is managed by the Procurement Division of the Department of General Services. The program provides government agencies and institutions with discounted rates for software licenses and upgrades, reducing the need for individual departments to conduct repetitive acquisitions.

“There’s an exciting opportunity for public sector agencies to benefit from automated DevOps practices,” says Bob Stevens, GitLab’s area vice president for Public Sector Federal. “This contract makes it simpler and more cost-effective for agencies to adopt The DevOps Platform, and deliver more resilient and efficient applications while keeping security at the forefront.”

GitLab believes that this contract, which makes The DevOps Platform more accessible and cost-effective, will expedite the broader adoption of DevOps in the public sector. GitLab’s single application will enable greater collaboration within public sector agencies, allowing teams to partner on planning, building, securing, and deploying software. To streamline the process, GitLab will work with channel partners including Acuity Technical Solutions, Launch Consulting and Veteran Enhanced Technology Solutions.

“Public sector agencies are under tremendous pressure to transform and streamline their software development processes,” said Michelle Hodges, GitLab’s vice president of global channels.
“We’re proud to extend the power of our platform to a new network of customers via trusted channel partners and to help evolve the ways in which they collaborate on and deliver software.”

Read More

How the DORA metrics can help DevOps team performance

Accelerated adoption of the cloud requires tools that aid faster software delivery and performance measurement. By delivering visibility across the value chain, the DORA metrics streamline alignment with business objectives, drive software velocity, and promote a collaborative culture.

Software delivery, operational efficiency, quality – there is no shortage of challenges around digital transformation for business leaders. Customer satisfaction, a prominent business KPI, has paved the way for experimentation and faster analysis, resulting in an increased volume of change in the software development lifecycle (SDLC). Leaders worldwide are helping drive this culture of innovation aligned with organizational goals and objectives. However, it is not only about driving the culture; it is also about collaboration, visibility, velocity, and quality.

Cloud computing and microservices are driving the cloud-first approach to software delivery, allowing services to scale independently and teams to move faster. But without DevOps, a team lacks the underlying foundation to move fast efficiently. DevOps has the power to let the smallest changes have great effects. This brings us to the questions: how do you measure velocity and impact? And how do you assess quality and ensure it is not sacrificed for velocity? The latter problem is what is commonly referred to as technical debt.

A continuous journey needs continuous improvement

Any improvement starts with measurement. Measuring and optimizing DevOps practices improves developer efficiency, overall team performance, and business outcomes. DevOps metrics demonstrate effectiveness, shaping a culture of innovation and, ultimately, overall digital transformation. In the Accelerate State of DevOps 2021 report by the DevOps Research and Assessment (DORA) team at Google Cloud, which draws on 7 years of data collection and research, four metrics are identified as key to measuring software delivery performance.
What are these metrics?

- Deployment frequency
- Lead time for changes
- Time to restore service
- Change failure rate

Deployment frequency

Let’s start with the velocity of development. Deployment frequency measures how often the organization deploys code to production or releases it to end users. This metric borrows from lean manufacturing concepts, wherein multiple small batch sizes are the preferred approach for higher efficiency and more rapid adjustments.

Lead time for changes

Now comes the extent of automation in your processes. Lead time for changes measures the time needed for committed code to run successfully in production. This is one of the two metrics with significant variance in the data.

Time to restore service

This represents a business’s capacity to recover. Time to restore service measures the time needed to restore service to its previous level after an incident. Here too we see significant variance in the data.

Change failure rate

And finally, we take a look at quality. Changes that cause a failure in the system – a deployment failure, an incident, a rollback, or a remedy – all contribute to the change failure rate.

Driving visibility into the DevOps lifecycle

Recently, Zoopla used DORA metrics to boost deployments and increase automation. Understanding the root cause of their problems helped them make informed adjustments to their process workflows, automation, tools, and more. They recognized the value of using a single platform to overcome roadblocks in velocity and innovation. This brought added visibility into their system, which helped improve measurement and analytics. Our 2021 Global DevSecOps Survey shows […]

Read More

GitLab’s DevOps platform enables Tangram Vision’s engineering team to succeed at remote work

On March 14, 2020, Tangram Vision CEO Brandon Minor flew from Colorado into the Bay Area to meet with COO Adam Rodnitzky. The two had just launched Tangram Vision, the company they co-founded to make sensors simpler for robotics, drones, and autonomous vehicles. Their plan was to alternate, each month, working at each other’s location. That week, however, the Covid-19 pandemic lockdown began, forcing them to scrap that plan and figure out how to collaborate successfully from afar. “We didn’t see each other in person again for a very long time. That kicked off our remote work experience,” Minor says.

The Tangram Vision engineering team started using GitLab’s DevOps platform, which enabled them to work together without missing a beat. “GitLab was a key tool that allowed us to work really fluidly in a remote context,” says Minor. “Our engineering team has placed GitLab at the core of our remote workflow because it reinforces our values and perspectives around working well remotely.”

The Tangram Vision Platform takes care of complex perception tasks like sensor fusion, calibration, and diagnostics, built on a scalable data backend that allows engineers to track, optimize, and analyze every sensor in their fleet. Tangram Vision’s SDK includes tools for rapid sensor integration, multi-sensor calibration, and sensor stability, saving robotics engineers months of engineering time.

Supporting complex collaboration

Perception systems are notoriously hard to get up and running, and then to maintain over time, because of important lower-level activities like sensor integration and calibration. “We make sure all the sensors’ data is running smoothly, everything’s working together perfectly to basically a plug-and-play level. And then we enable the developers working on top of that to monitor and correct their system over time,” Minor says. Tangram Vision has just launched a user hub that functions as a centralized sensor data center.
The user hub joins their multi-sensor calibration module, as well as a multiplexing module that maintains stream reliability for all connected sensors. Developers can access a starter set of perception development tools (Tangram Vision Platform – Basic), which will be available on an open-source hub. Much of the initial user feedback will come through, and be managed within, repositories hosted on GitLab, both public and private, Minor says.

GitLab as a core for code

The engineering team has evaluated other platforms, according to Greg Schafer, senior web architect. “We’ve looked around but we’ve been very turned off by them for one reason or another. We really haven’t swayed in wanting to use GitLab as our core for code,” Schafer says.

The team uses GitLab to manage branches and merge requests (MRs), boosting efficiency and control. “We were having a bit of a struggle early on managing the short-term flow. It was hard to put down tasks to paper. So, I dove deep into GitLab to see how it could help us there. And now that’s what we use. GitLab is my product management tool,” Minor says. The alternative, siphoning MRs into tools like Notion and Slack, would have been too cumbersome. “Having code-focused discussions in those places would’ve been very awkward vs. our current orientation of having those discussions in GitLab. Having that history of MRs and threads has been very useful,” Schafer says. Doing all of the code reviews in the MR itself builds a paper trail […]

Read More