News

Extend TMS WEB Core with JS Libraries with Andrew: Tabulator Part 2: Getting Data Into Tabulator

Last time out, we started our adventure into Tabulator, a JavaScript library that provides a grid-style control you can use in your TMS WEB Core projects. This time out, we're going to focus on just one aspect: getting data into Tabulator. The material in this post may also be of interest more broadly, no matter which grid control you're using, or even if you're just looking to get data into your TMS WEB Core application in general. We're going to dip our toe into SPARQL, used to query the data behind Wikipedia. We'll also show how to quickly set up an XData server as a proxy, both for SPARQL and other remote data sources. This should be fun!

Motivation. Tabulator, like virtually every other JavaScript-based grid control, expects to receive data in JSON format. Unfortunately, as we'll see, the various components involved along the way have differing levels of rigidity when it comes to the JSON format itself. Sometimes other formats are supported, like XML or CSV, but JSON is almost always the primary method. This isn't much of a surprise, of course, as JavaScript and JSON are very closely related. Many interesting remote sources of data can be accessed via a REST API and will often, if you ask politely, return data in JSON format. All good. However, formulating requests acceptable to the remote server, and actually getting the bits of data out of the JSON you get back, can sometimes be adventures all their own. At the same time, there may be limits on how much data you can request from a remote API, or how fast it can serve up the data you are interested in. You might also be required to use an API key to make requests, one that you absolutely do not want to embed in your TMS WEB Core application (or any JavaScript client application). And if you're accessing multiple remote data sources, you might be multiplying the potential headaches to come.

So in this post, we're going to contact multiple remote data sources. We're going to use a private API key. And we're going to address performance aspects of the data we're using. Along the way, we'll also explore a broader array of data types (images, for example). And once we finally get hold of some data, we'll add it to a Tabulator table. We'll also see if we can add a few nice finishing touches along the way, to help balance some of the Tabulator content in other upcoming posts.

The example we're going to develop here is a simple one, at least to visualize. We want to use a date picker to select a birthday. With that birthday, we want to see a list of all of the actors (movies and TV shows) that share that birthday. And if we select an actor, we want to see all the movies and TV shows where they had a role. Sounds easy enough, right? Well, the challenges here are not with TMS WEB Core or with Tabulator, but rather with the complexities of getting data from remote data sources. But these are all solvable problems. Just maybe a little more tenacity is required. Be warned, odd […]
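
As a rough sketch of where this is headed, assuming a hypothetical XData proxy endpoint (the URL, element id, and field names below are illustrative, not from the original post), getting remote JSON into Tabulator can be as simple as pointing it at a URL, or fetching the JSON yourself and handing over the array:

```javascript
// Minimal sketch: load remote JSON into a Tabulator table.
// The URL and field names are hypothetical placeholders for an
// XData proxy endpoint like the one described above.
var table = new Tabulator("#actors-table", {
  ajaxURL: "https://example.com/xdata/actors?birthday=1964-09-02", // hypothetical endpoint
  layout: "fitColumns",
  columns: [
    { title: "Name", field: "name" },
    { title: "Born", field: "birthdate" },
    { title: "Known For", field: "knownfor" }
  ]
});

// Alternatively, fetch the JSON yourself (e.g., so an API key stays
// server-side, or to massage the JSON shape first) and pass the array in:
fetch("https://example.com/xdata/actors?birthday=1964-09-02")
  .then(function (response) { return response.json(); })
  .then(function (data) { table.setData(data); });
```

The second form is handy precisely because of the API-key and JSON-rigidity issues raised above: the proxy or the client code can reshape whatever the remote source returns into the flat array of objects Tabulator prefers.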

Read More

Behind the scenes of Subway Surfers: A Q&A with SYBO

What have been the biggest challenges with scaling up so fast?

Mathias: For a smaller studio, supporting a game with 100 million players is an extreme task. It's almost incomprehensible that such a young team was operating such a massive game. We, along with Unity, grew as the industry grew, so everything, including the parameters, was changing quickly. Our goal from the beginning was to give gamers the best possible feeling when playing Subway Surfers, and we keep this in mind with every step and decision we make. We took over publishing for the mobile platforms in mid-2020. For the last 18 months, we've been speed-learning how to self-publish. We've improved the game, but there's always the potential to continue to grow and optimize. When we took over self-publishing, we were initially deactivated on the ad platform: although we started with only 10% of our player base, that amounted to 10 million players, and they thought we were hackers.

How about the technical challenges?

Murari Vasudevan: On the technical side, for iOS and Android, the scale of the number of devices is enormous. For Android alone, there are more than 20,000 unique devices. When we took over publishing, we needed to ensure that the game ran smoothly on all the devices we targeted. We had issues with performance exceptions, crashes, and the application not responding. We dealt with the scale of each of these issues happening live, as well as the need to diagnose them in real time. To avoid putting off our player base, we needed to look at the Unity Profiler, optimize, and work on the loading time, ensuring that it matched or improved upon the original. To ensure good gameplay on even the least powerful smartphones, we devised a number of techniques, including batching level geometry to reduce draw calls, rotating coins using C# scripts running on the CPU rather than shaders running on the GPU to further minimize draw calls, minimizing UI redraws, ensuring optimal timing in level generation, and not generating more content than a user can see. The Unity Profiler and Frame Debugger make it much simpler to monitor exactly how all these techniques are working. We also use Unity's different quality settings to optimize fps and refresh rates on low-end devices. We spent a lot of time tweaking our bootstrapping funnel and optimizing our code to ensure that we were really efficient in our process. We had to have a lot of analytics in place to see what players were doing, how they were moving through the various stages, and where they were dropping off. This was the flow that came with taking over so many devices and players, and it was an ongoing challenge we had to confront right from the start.

Read More

Detecting performance bottlenecks with Unity Frame Timing Manager

The Frame Timing Manager is a feature that provides frame-level time measurements, like total frame CPU and GPU times. Compared to the general-purpose Unity Profiler and Profiler API, the Frame Timing Manager is designed for a very specific task, and therefore comes with a much lower performance overhead. The amount of information collected is carefully limited, highlighting only the most important frame stats. One main reason for leveraging the Frame Timing Manager is to investigate performance bottlenecks in deeper detail. This allows you to determine what curbs your application's performance: is it bound by the main thread or render thread on the CPU, or is it GPU-bound? Based on your analysis, you can take further action to improve performance. When the detected bottleneck is on the GPU side, the dynamic resolution feature can help: you can increase or reduce the rendering resolution to dynamically control the amount of work on the GPU. During development, you can even visualize timings in an application HUD, which gives you a real-time, high-level mini Profiler built right into your application, always readily available to use. Lastly, you can use the Frame Timing Manager for release-mode performance reporting. Based on the collected information, you can send statistics to your servers about your application's performance on different platforms for better overall decision-making.

Read More

The Best Native App Builder Whitepaper You’ll Never Forget

There is a fascinating conversation between Neo and the Oracle in The Matrix Reloaded. Neo asks the Oracle: if she already knows, how can he make a choice? She cleverly replies that he has already made his choice; he only came to her to try to understand why. It is strange, then, that we often pick things emotionally and then logically persuade ourselves that we have chosen the best option. As a result, we come to a comparison not to find the best option, but only to convince, rationalize, or understand our own decision. We are prone to making the same mistake while choosing the best native app builder for development. It is easy to choose between good and evil, but choosing between good and better is always challenging. Choosing the best one is always a difficult task because we must consider many things. For example, say there are three native app builders on the shelf: A is free but least productive, B is an excellent value for money, and C is overhyped. If we choose anything other than B, that is a poor choice and against our interests. We would also be misusing our financial vote, depriving a great company of the means to develop more excellent products for us. Thus, we need extensive research to finalize a development tool. Fortunately, a new whitepaper helps us by evaluating three Windows application development frameworks, and you can download it for free now.

What have we not learned from the DOTCOM bubble?

For several reasons, it is easy for companies with deep pockets to exaggerate things in the software industry. Firstly, because it is software, one cannot SEE or correctly measure the end product. Secondly, its long-term consequences might become evident only after several years. Thirdly, many decision-makers do not have a full grasp of tech, so they turn to heuristics and guesstimates. This situation makes a perfect ground for snake-oil marketers to join the party and profit from other people's losses. It happened on an immense scale during the DOTCOM bubble, but it is constantly happening at different levels in the software industry. Carefully reading this whitepaper might save you from investing in a poor tool.

What might they be hiding from you?

Why is a software production system designed the way it is? There are technical and historical reasons, but there might also be control reasons. Many tools are designed with the mindset that the more users invest in the system, the stronger the owner company becomes. Thus, individual users lose power and control in the long run, and the parent company gets stronger and stronger, just like social media. Fortunately, it is easy to spot such practices: the more proprietary files you have to ship with the end product, the more closed the system is, and the less control the developer has over the product in the long run.

What are your criteria for choosing the best native app builder?

Many people search for the best free app builder, while others look for the best app builder without coding. Some prefer the fame of the tool, and some like its history. These choices might not be comprehensive. We need specific criteria covering all aspects of the software development life cycle. Our whitepaper includes all necessary points […]

Read More

GRUI By Sencha vs. React Grid: Which Is Better?

GRUI by Sencha is a high-performance grid solution for React apps. It can efficiently deal with a massive amount of data, and it is very easy to integrate with React. On the other hand, React Grid Layout is a powerful library for the front-end framework that allows you to create grids with absolute control over the layout. Both GRUI by Sencha and React Grid Layout offer powerful capabilities. However, which is the better option? In this article, you will find all the details.

What Is GRUI By Sencha?

Sencha GRUI is a modern, enterprise-grade grid solution for React applications that comes with 100+ data grid features. Sencha GRUI can efficiently handle millions of records, and it can load huge amounts of data with incredible speed. It also supports large feature sets, including filtering, grouping, and infinite scrolling. Besides, Sencha GRUI allows you to export data in different formats, like CSV, XLS, and PDF. On top of that, it is very easy to use. Overall, it has everything that a high-performance, robust grid should have.

Why Should You Use GRUI By Sencha?

- Capable of efficiently handling millions of records
- Offers large feature sets, including filtering, grouping, infinite scrolling, etc.
- Provides full customization control
- Supports data export to different formats, including CSV, TSV, HTML, PDF, and XLS
- Provides easy UI component integration to the grid

Read: Enterprise Ready React Data Grid | GRUI by Sencha

What Is React Grid Layout?

React Grid Layout is a draggable grid layout: a container component that allows you to rearrange and resize content panels. React Grid Layout is responsive, so you can use it on a variety of devices, including desktops, laptops, and smartphones. It also gives you absolute control over the layout. As a result, you can easily customize everything: you can change the column and row widths, and you can customize how cells are positioned within the grid.

Why Should You Use React Grid Layout?

- Compatible with server-rendered apps
- Allows you to serialize and restore the layout
- Supports draggable, resizable, and static widgets
- Allows you to add or remove widgets without rebuilding the grid
- Supports responsive breakpoints

Read: Why You're Failing At React Grid View

GRUI By Sencha vs. React Grid: Which Is The Better Solution?

GRUI by Sencha offers a significantly better solution. Unlike React Grid, it can efficiently handle millions of records. GRUI by Sencha also has a smaller footprint and payload than other grid solutions, ensuring that mission-critical apps deliver optimal performance.

Does GRUI Offer Faster Data Processing Performance Than React Grid?

Slow data processing can hurt the user experience; you don't want your application's users frustrated by sluggish performance. This is where GRUI comes into play. With client- and server-side buffered stores, it can load and manipulate large data sets within milliseconds. It is significantly faster and more efficient than other solutions, including React Grid.

Does GRUI Deliver Massive Performance Enhancement With Virtual Columns?

GRUI offers virtual columns, letting you render only the columns that are in the current viewport. This provides several advantages. First, you get a smoother scrolling experience. Second, it can significantly enhance performance for applications that need large numbers of columns. Let's take a look at this example: As you can see, it's a […]
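
For reference, here is a hedged sketch of the React Grid Layout usage described above; the panel keys, sizes, and contents are illustrative, while the layout-item shape follows the library's documented `{i, x, y, w, h}` convention:

```javascript
// Minimal sketch: a React Grid Layout container with three
// draggable/resizable panels. Panel names and positions are illustrative.
// The library's stylesheets (react-grid-layout/css/styles.css and
// react-resizable/css/styles.css) are normally imported as well.
import React from "react";
import GridLayout from "react-grid-layout";

function Dashboard() {
  // Each item: i = child key, x/y = grid position, w/h = size in grid units.
  const layout = [
    { i: "sales", x: 0, y: 0, w: 4, h: 2 },
    { i: "traffic", x: 4, y: 0, w: 4, h: 2 },
    { i: "errors", x: 8, y: 0, w: 4, h: 2, static: true } // static: pinned in place
  ];

  return (
    <GridLayout className="layout" layout={layout} cols={12} rowHeight={30} width={1200}>
      <div key="sales">Sales</div>
      <div key="traffic">Traffic</div>
      <div key="errors">Errors</div>
    </GridLayout>
  );
}

export default Dashboard;
```

Because the layout is a plain array of objects, serializing and restoring it (one of the bullet points above) amounts to persisting that array and passing it back as the `layout` prop.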

Read More

How to A/B test game difficulty with UGS Use Cases

Perfecting your game's design can be difficult when you can't see how your players interact with it. With A/B testing, you can make design decisions based on how your players really play the game. The Unity Gaming Services (UGS) Use Cases are a collection of samples that implement typical backend game use cases and game design elements, show how to resolve specific development tasks, and highlight the efficiency you can achieve in your game backend by integrating different UGS packages in your project. One of these samples is about A/B testing. This lets you segment players into multiple test groups in order to determine which version of a specific game element is the most engaging or intuitive to your players. The example we show tests the amount of XP required for leveling up. This particular example is suited for something like alpha playtesting of a single-player game, but it is just one example; there are many potential uses for A/B testing with UGS, even in published or multiplayer games.
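
As a generic illustration of the segmentation idea (not the UGS implementation; the hash, group count, and XP values are all illustrative), deterministic bucketing assigns each player to a stable test group so they always see the same variant:

```javascript
// Generic sketch of deterministic A/B bucketing: hash a stable player id
// into one of N test groups. Group definitions are illustrative only.
function hashCode(str) {
  let h = 0;
  for (let i = 0; i < str.length; i++) {
    h = (h * 31 + str.charCodeAt(i)) | 0; // simple 32-bit rolling hash
  }
  return Math.abs(h);
}

const XP_VARIANTS = [100, 80, 120]; // XP required to level up, one value per group

function xpForPlayer(playerId) {
  const group = hashCode(playerId) % XP_VARIANTS.length;
  return XP_VARIANTS[group];
}

console.log(xpForPlayer("player-123")); // the same player always gets the same variant
```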

Read More

GitLab 15: The retrospective

No cloud native, no containers, and no remote work: those were just a few of the things missing from the technology landscape in 2011 when we launched GitLab 1.0. It's been a journey, for sure. Here's a look back at how far we've traveled to get to GitLab 15.

It started with source code management

In the beginning of GitLab there was source code management (SCM)… and that was it. Continuous integration (CI) became part of GitLab because our co-founder Dmitriy Zaporozhets got tired of having to keep the CI servers running separately, so we decided to bring continuous integration into the mix. Even then we knew it didn't make sense for companies to "DIY" critical parts of their process. That being said, it did feel counterintuitive to bring SCM and CI together, but we tried it anyway. Continuous delivery (CD) eventually evolved out of the CI/SCM integration, but it is crazy to think that when we started GitLab, CI/CD was not really a consideration.

DIY DevOps really did exist

What people were talking about, though, was DevOps, and specifically DIY DevOps, because back then it was completely normal for teams to assemble a bunch of tools and call it done. When we would talk about the importance of fewer tools and more integration, people would turn up their noses. We heard a lot of "different tools for different things" and "many have sharp tools." Today we know that a DevOps platform increases development speed and release cadence. But back then, gluing together tools was seen as normal.

What's old is new again

Back in the day there were lots of tools, and also very different programming languages than we reach for today. In the 2014 era, developers often wrote code in Ruby or JavaScript and kept things layers away from the microprocessor. Over the years, that's changed drastically. Rust and Go, as just two examples, have brought us back to the processor and reflect today's modern programming styles. It's another sign of how drastically things have shifted over time.

It wasn't cloud-y

The cloud was in its infancy when GitLab started, and at the time we all thought it was probably a great solution for startups or small businesses, but perhaps not something that would ever be in widespread use. Fast-forward to today, where most companies run their infrastructure in the cloud. Now it's widely accepted that a cloud-native architecture helps teams deliver better software faster, and cloud skepticism has drifted away.

Security was siloed

Security teams, and tools, were completely separate entities when GitLab began, and that, of course, made doing something inherently difficult even more so. Devs were asked to fix bugs without any context, process, or knowledge of deployment status, and naturally weren't very excited about it all. Realizing this, we began slowly adding scans to our CI/CD steps so that security was part of the pipeline and not separate from it. The goal is to let developers and teams deal with security in an incremental way, rather than as a large to-do list at the end of the process. And that progress is ongoing.

Code review wasn't integrated

Eleven years ago, code review wasn't that different from security, i.e., it was something done in a distant time and place and without context. Today, merge […]

Read More

Our Privacy Policy has been updated

As part of our commitment to keeping our policies current, we made some updates to our Privacy Policy on June 14, 2022. These updates are intended to clarify our existing data processing activities and to provide information on processing that may derive from new features. Through this update, we continue to provide transparency into our data processing activities, in line with an evolving privacy landscape. Specifically, these policy updates include the following:

- Clarification about which processing activities apply to each respective GitLab product;
- Information about when personal data may be collected to verify someone's identity to enable certain product features;
- Clarification about what personal data is collected to provide a license and maintain a subscription;
- Additional information regarding our Service Usage data collection practices, and the inclusion of certain processing activities, such as Event Analytics and Call Recordings;
- Additional information regarding the purposes for which personal data is collected;
- Minor updates regarding our legal basis for processing your personal data in the European Union;
- Updates to our data retention practices for inactive accounts;
- Clarification about how to delete your personal data at GitLab and how deletion is effectuated for public projects;
- An additional notice that details our processing and your rights under the California Consumer Privacy Act, including CCPA metrics reporting.

Overall, we believe that these updates will empower our users to make informed decisions about their personal data. Please visit the complete text of our Privacy Policy and Cookie Policy to learn more about how GitLab processes personal data and your rights and choices regarding such processing.

Read More

Extend TMS WEB Core with JS Libraries with Andrew: Tabulator Part 1: Introduction

So far, we've covered some hugely important and popular JavaScript libraries, like jQuery, Bootstrap, and FontAwesome. We've also explored other very important but somewhat lesser-known JavaScript libraries, like Luxon and CodeMirror, and some considerably smaller and less widely used JavaScript libraries like Interact.js, BigText.js, and Showdown.js. Today we're going to introduce what has quickly become my favorite JavaScript library, Tabulator, which accurately describes itself as "Easy to use, simple to code, fully featured, interactive JavaScript tables." Over the next handful of posts we'll go into far more detail about how to make the most of it in your TMS WEB Core projects.

Why Tabulator? Tabulator is neither wildly popular (yet!) nor particularly obscure. And it has perhaps the misfortune of falling into a hugely popular category of JavaScript libraries, defined with terms like grids or tables. If you were to do a Google search for a new JavaScript grid control, you might run through 10 or 20 before Tabulator even comes up on the list. On the other hand, if you search GitHub for a JavaScript 'data grid', it might come up second or third. But as with any popular JavaScript library category, there are plenty of criteria you can use to filter out which ones might be the best candidates for your projects. When I'm looking for something, these are the kinds of things I typically consider:

- Price per user or per developer, licensing terms, and so on
- Any development activity in the past year? Does the developer respond to questions?
- Are there dependencies, like jQuery or JavaScript frameworks such as React or Angular?
- How good/complete is the documentation? Is there plenty of example code available?
- Style vs. substance and needs vs. wants: does it just look pretty, or is it actually useful?

When the dust settles, you might well reach the same conclusion I have, and give Tabulator a try. But even if you find another control that is better suited to your needs, or perhaps you're already quite happily using another control, there's still a lot of interesting ground to cover (and some fun examples) when it comes to using any of these kinds of controls in a TMS WEB Core project.

Motivation. The need for a grid-like control in a modern web application is usually self-evident. Displaying data for the user to see in tabular form, along with maybe some options to filter or sort that data in ways that are easy and convenient, is either something you need in your project, or something you don't. A better question for our purposes might be what the motivation is for a Delphi developer to use a new and potentially unfamiliar JS library, particularly when it comes to providing functionality that we might already have decades of experience with, using popular, reliable, capable, and easy-to-use Delphi VCL components of various flavors. Naturally, this need arises when moving to a different environment, the web, where our preferred set of controls may not be as readily available. At the same time, this shift to the web also brings with it opportunities to change how these kinds of applications are developed, for better and for worse.

TDataSet vs. The Web. When we covered all those JSON examples (see Part 1 and Part 2), one of […]
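
As a taste of the "simple to code" claim, here is a minimal sketch of a Tabulator table over a small inline data array; the element id, data, and column definitions are illustrative, not taken from the post:

```javascript
// Minimal sketch: a Tabulator table built from an inline data array.
// Assumes a <div id="example-table"></div> exists on the page.
var table = new Tabulator("#example-table", {
  data: [
    { id: 1, name: "Alice", progress: 42 },
    { id: 2, name: "Bob", progress: 87 }
  ],
  layout: "fitColumns", // stretch columns to fill the table width
  columns: [
    { title: "Name", field: "name", sorter: "string" },
    { title: "Progress", field: "progress", sorter: "number", formatter: "progress" }
  ]
});
```

Column definitions carry the behavior (sorting, formatting, and so on), which is a large part of what makes the library feel closer to a Delphi grid component than to hand-rolled HTML tables.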

Read More

Observability vs. monitoring in DevOps

In almost any modern software infrastructure, there is inevitably some form of monitoring or logging. The launch of syslog for Unix systems in the 1980s established both the value of being able to audit and understand what is going on inside a system, as well as the architectural importance of separating that mechanism. However, despite the value and importance of this visibility into system behavior, too often monitoring and logging are treated as an afterthought. There are countless instances of systems emitting logs into a void, never aggregated or analyzed for critical information, or of infrastructure where legacy monitoring systems were installed a decade ago and never updated to modern standards. Recently, shifts in the operational landscape have given rise to the concept of observability. Rather than expecting engineers to form their own assumptions about how their application is performing from static measurements, observability enables them to see a holistic picture of their application's behavior and, critically, how a user perceives performance.

What is observability?

To understand the value of observability, it's helpful to first establish an understanding of what monitoring is, as well as what it does and does not provide in terms of information and context. At its core, monitoring is presenting the results of measurements of different values and outputs of a given system or software stack. Common metrics are things like CPU usage, RAM usage, and response time or latency. Classic logging systems are similar: a static piece of information about an event that occurred during system operation. Monitoring provides limited-context measurements that might indicate a larger issue with the system. Aggregation and correlation are possible using traditional monitoring tools, but they typically require manual configuration and tuning to provide a holistic view. As the industry has advanced, the concept of what makes for effective monitoring has moved beyond static measurements of things like CPU usage. In its now-famous SRE book, Google emphasizes that you should focus on four key metrics, known as "Golden Signals":

- Latency: The time it takes to fulfill a request
- Traffic: High-level measurement of overall demand
- Errors: The rate at which requests fail
- Saturation: Measurement of resource usage as a fraction of the whole; typically focuses on constrained resources

While these metrics help home in on a better picture of overall system performance, they still require a non-trivial engineering investment to design, build, integrate, and configure a complete monitoring system. There is considerable effort involved in enumerating failure modes, and manually defining and associating the correct correlations can be time-consuming even in simple cases. In contrast, observability offers a much more intuitive and complete picture as a first-class feature: you don't need to manually correlate disparate monitoring tooling. An aggregated monitoring dashboard is only as good as the last engineer who built it; conversely, an observability platform adapts itself to present critical information in the right context, automatically.
This can even extend further left into the software development lifecycle (SDLC), with observability tooling providing important performance feedback during CI/CD runs, giving developers operational feedback about their code. Ultimately, observability provides more holistic debugging […]
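
To make the golden signals concrete, here is a minimal sketch in plain JavaScript, using no particular monitoring library, of recording three of the four signals (traffic, errors, latency) around a hypothetical request handler; all names are illustrative:

```javascript
// Hedged sketch: counting traffic, errors, and latency for each request.
// "handler" is any async request handler; the metrics object stands in
// for whatever time-series store a real monitoring system would use.
const metrics = { requests: 0, errors: 0, latenciesMs: [] };

async function instrumented(handler, req, res) {
  const start = Date.now();
  metrics.requests += 1;                         // traffic: overall demand
  try {
    await handler(req, res);
  } catch (err) {
    metrics.errors += 1;                         // errors: rate of failed requests
    throw err;
  } finally {
    metrics.latenciesMs.push(Date.now() - start); // latency: time to fulfill a request
  }
}
```

Even this toy version hints at the engineering cost the excerpt describes: saturation, correlation across services, and dashboards all remain to be built by hand, which is precisely the gap observability platforms aim to close.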

Read More