From the blog

5 Examples Of The Best Low Code Platforms

Thanks to the best low-code platforms, it is now possible to develop applications with far less effort. They won’t force you to burn your budget, wait for days or months, or hire a roster of engineers. Low-code platforms help firms optimize their software development process by providing a range of easy visual tools. According to Gartner, 65 percent of application development projects will use low-code development by 2024.[1] The importance of applications in our daily lives is unprecedented: they are crucial in both personal and professional life. Low-code platforms are a beneficial investment for many corporate users. Businesses that want to grow must find new ways to increase their output, and investing in low-code platforms is a more current approach to that problem.

What are low-code platforms and what are the benefits?

A low-code platform allows business users to develop applications without writing code. This makes it easier to create custom applications that meet specific business needs. Low-code platforms have become increasingly popular in the last few years. They are useful for companies that have limited coding knowledge, or that don’t have the resources to hire teams of developers to build applications from scratch.

Lowering the barrier to entry

The biggest benefit of low code is that it lowers the barrier to entry-level software development even further. There’s no need for developers to write code: anyone with basic skills in user interface design can create software on a low-code platform. This means that businesses can build an application without having full-time developers on staff.

Increased speed and agility

Low code allows organizations to move faster than before by letting less-experienced users create software directly from their business requirements. So, instead of designing and developing, organizations can quickly move from prototype through testing. This often happens within days or weeks instead of months.

More reliable and scalable

Low-code platforms deliver more reliable applications. They allow users to create applications that are easy to use, easy to change, and easy to maintain. Apps created with low-code platforms are also easier to scale than ones created with traditional programming languages like Java or C++.

Greater resilience and control

With standard programming languages like Java or C++, data is typically stored within the application itself. With a low-code platform, it is instead saved externally in a database or a cloud storage provider like Amazon S3 or Google Cloud Storage. This means you can keep your data private if necessary (for example, if it contains sensitive personal information) while maintaining complete control over who has access to it.

Five examples of the best low-code platforms

Each low-code platform has a distinct approach. This raises the question: do you have to choose just one, or are there multiple platforms you can select from? Here are the top five low-code platforms to choose from.

1. RAD Studio

RAD Studio is a software development suite that enables developers to create software for Windows, macOS, iOS, Android, and Linux.

Development environment

The RAD Studio development environment is easy to use and provides all the tools you need to develop cross-platform applications, including a code editor, an Integrated Development Environment (IDE), database tools, a web server, database connectivity, reporting tools, and more. You can build desktop apps with native controls by using C++ […]

Read More

Extend TMS WEB Core with JS Libraries with Andrew: Tabulator Part 3: Viewing Data in Tabulator

In this third stop on our Tabulator adventure, we’re going to focus mostly on the options available for how data is displayed. But in order to narrow our focus a little further, we’re going to take the TMS WEB Core project from last time, which we were calling ActorInfo, and explore ways we can view the data we have available, covering many Tabulator options along the way. Styling and theming a modern web application potentially involves some amount of CSS work, so we’ll cover a bit of that as well. And to keep it interesting for those not all that keen on Tabulator specifically, we’ll also cover how such a TMS WEB Core app might be deployed in a production setting.

Motivation.

As we discussed in the first of these Tabulator posts, having a grid control that does what you want, as a developer, makes for an enormously powerful tool. And for web applications, what developers are often after is the ability to customize, as much as possible, anything that is visible to the user. This may arise from a need for a responsive interface that is accessible to everyone on every device. Or it may come from a desire to apply a specific style or theme, including colors, logos, iconography, and that sort of thing. Or some level of customization may be needed to address certain mechanical aspects of the interface or the underlying data. Or it can be any combination of these, or other considerations entirely. The point being, more options for customization are generally better for the developer. Better still if there are reasonable defaults to start with and a consistent approach to customization that is not overly difficult to implement.

The approach I’m going to take here is perhaps a little less organized than I’d like, but it reflects more accurately how this has come together. We’ll start with where we ended up last time, and then systematically make changes to implement whatever customization is desired, outlining the steps and the thought process along the way.

By the time we’re done today, we’ll have a pretty functional app, deployed and ready for users. And while I don’t expect anyone to particularly agree with my styling or theming or layout choices, the main takeaway should be, as usual, that you’ve got options!

Starting Point

Two disclaimers. Just a couple of things to point out before we get too immersed in our work here. First, there are countless ways a developer can choose to implement any particular bit of functionality. And the same developer, facing the same choices, in the same app, may even implement the same thing in different ways. There are some examples of that on display here. Sometimes, this is because I learned something new and haven’t gone back and updated the original code. Sometimes, it’s because I’m lazy and cut and paste code where it isn’t really important (code executed infrequently, say), but might spend more time on the same thing in another spot where it is more important (code executed frequently in a loop, for example). So don’t be too harsh when looking at any of this code. I’ve tried to clean up the worst examples, but I’m sure some are lingering still. Case in point, in the XData application, in the service endpoint, […]
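To give a flavor of the display options being discussed, here is a minimal sketch of a Tabulator-style column definition with a custom formatter. The `getValue()` call mirrors Tabulator’s cell component API, but the column names, data, and formatting choices are purely illustrative and not taken from the ActorInfo project:

```typescript
// A minimal Tabulator-style column definition with a custom formatter.
// Tabulator passes a cell component to each formatter; here we rely only
// on its getValue() method, so a tiny interface is enough to describe it.
interface CellComponent {
  getValue(): any;
}

// Hypothetical formatter: render a birth date as YYYY-MM-DD,
// or a placeholder when the value is missing.
function birthDateFormatter(cell: CellComponent): string {
  const value = cell.getValue();
  if (!value) return "(unknown)";
  return new Date(value).toISOString().slice(0, 10);
}

// Column definitions as they might appear in a Tabulator constructor:
// new Tabulator("#actor-table", { columns, ... })
const columns = [
  { title: "Name", field: "name" },
  { title: "Born", field: "birthDate", formatter: birthDateFormatter },
];
```

Because the formatter is just a function of the cell value, it can be unit-tested without a browser by handing it a stand-in cell object.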

Read More

Break the black box of software delivery with GitLab Value Stream Management and DORA Metrics

Our customers frequently tell us that despite being very effective DevOps practitioners, they still struggle to build a data-driven DevOps culture. They find it especially hard to answer the fundamental question: what are the right things to measure? This becomes even more challenging in enterprise organizations, where there are hundreds of different development groups and no normalization in how things are done or measured. Because of this, we see strong interest from customers in metrics that would allow them to standardize between teams and benchmark themselves against the industry. Value Stream Analytics helps you visualize and manage the DevOps flow from ideation to customer delivery.

What Are DORA Metrics?

With the continued acceleration of digital transformation, most organizations realize that technology delivery excellence is a must for long-term success and competitive advantage. After seven years of data collection and research, the DORA State of DevOps research program has developed and validated four metrics that measure software delivery performance: (1) deployment frequency, (2) lead time for changes, (3) time to restore service, and (4) change failure rate. In GitLab, The One DevOps Platform, Value Stream Analytics (VSA) surfaces a single source of insight for each stage of the software development process. The analytics are available out of the box for teams to drive performance improvements.

What does DORA bring to Value Stream Analytics?

Value Stream Analytics (VSA) measures the entire journey from customer request to release and automatically displays the overall performance of the stream. Each stage in the value stream is transparent and compliant in a shared experience for everyone in the company. This makes VSA the single source of truth (SSoT) about what’s happening within the entire software supply chain, with the DORA metrics as the key measure of the value stream’s outputs.

How does Value Stream Analytics work?

Value Stream Analytics measures the median time spent by issues or merge requests in each development stage. For example, a stage might begin with the addition of one label to an issue and end with the addition of another; Value Stream Analytics measures each stage from its start event to its end event. For each stage, a table lists the workflow items filtered in the context of that stage: in stages based on labels, the table lists issues, and in stages based on commits, it lists merge requests. The VSA MR table provides deeper insight into the stage time breakdown. These tables provide a deep dive into stage performance and allow users to answer questions such as: Where are the bottlenecks that are slowing down the delivery of value to customers? How can we reduce the time spent in each stage so we can deliver features faster and stay competitive? How can we develop code faster? How can we hand off to QA faster? How can we push changes to production more quickly? Using the “Filter results” text box, you can filter by a project (example below) or a parameter (e.g., Milestone, Label).

Value stream analytics filtering.

No login is required to view Value Stream Analytics for public projects, where you can become familiar with stream filtering, default stages, and deep-dive tables. For a full view of the DORA metrics, you have to log in with your GitLab Ultimate-tier account or sign up for a free […]
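The start-event/end-event mechanics described above can be sketched in a few lines. This is an illustrative approximation of the calculation, not GitLab’s actual implementation, and the event data is made up:

```typescript
// Illustrative sketch of the stage measurement described above:
// a stage starts at one event (e.g., a label added) and ends at another,
// and the stage metric is the median duration across all items.
interface StageEvents {
  startedAt: number; // ms timestamp of the stage's start event
  endedAt: number;   // ms timestamp of the stage's end event
}

function medianStageHours(items: StageEvents[]): number {
  const durations = items
    .map((i) => (i.endedAt - i.startedAt) / 3_600_000) // ms -> hours
    .sort((a, b) => a - b);
  const mid = Math.floor(durations.length / 2);
  // Median: middle element for odd counts, mean of the two middle
  // elements for even counts.
  return durations.length % 2
    ? durations[mid]
    : (durations[mid - 1] + durations[mid]) / 2;
}
```

The median (rather than the mean) keeps one pathologically slow issue from dominating the stage metric, which is why it suits bottleneck hunting.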

Read More

GitLab’s commitment to enhanced application security in the modern DevOps world

With GitLab 14, we saw a deep emphasis on modernizing our DevOps capabilities. This modernization enabled enhanced application security and strengthened collaboration between developers and security professionals. We saw enhancements such as:

- a global rule registry and customization for policy requirements, with support for separation of duties
- a newly developed browser-based Dynamic Application Security Testing (DAST) scanner used to test and secure modern APIs and Single Page Applications
- more support for different languages using Semgrep
- new vulnerability management capabilities to increase visibility

With the GitLab 15 release, we can see that our commitment to enhancing application security across the board is stronger than ever. In this blog post, I will provide details on how GitLab is committed to enhancing not only security but also efficiency. Discover how GitLab 15 can help your team deliver secure software while maintaining compliance and automating manual processes. Save the date for our GitLab 15 launch event on June 23rd!

GitLab 15 security features

With every GitLab release, there are plenty of enhancements to our security tools, and GitLab 15 is no exception. You can see the boatload of security enhancements released in GitLab 15 below. These features run across different stages of the software development lifecycle. I have created a video showing some of the coolest new security features in GitLab 15.

Scanners moved to GitLab Free Tier

Many of our scanners were only part of GitLab Ultimate in the past. However, over time, certain scanners have been moved to the GitLab Free Tier, enabling you to enhance the security of your application no matter which tier of GitLab you are using.

Scanner              Introduced   Moved to Free
SAST                 10.3         13.3
Container Scanning   10.4         15.0
Secret Detection     11.9         13.3

Within the Free Tier, you are able to download the reports generated by the security scanners. This allows developers to see which vulnerabilities were detected within their source code and container images. However, there are benefits to upgrading to Ultimate, which are described below.

Benefits of upgrading to Ultimate

Some organizations have multiple groups and projects they are working on, as well as a security team that manages all the detected vulnerabilities. While having security scan reports ready for download is useful, it is not exactly scalable across an organization. This is where Ultimate assists in enhancing DevSecOps efficiency.

Scanners

While the GitLab Free Tier includes SAST, Secret Detection, and Container Scanning to find vulnerabilities in your source code, upgrading to Ultimate provides you with even more scanners. Here are some of the additional scanners provided in Ultimate:

Developer Lifecycle

In Ultimate, there is enhanced functionality within the developer lifecycle. The merge request a developer creates will contain a security widget displaying a summary of the new security scan results. New results are determined by comparing the current findings against existing findings in the default branch. The results contain not only detailed information on each vulnerability and how it affects the system, but also solutions for mitigating or resolving the issue. These vulnerabilities are also actionable, meaning a comment can be added to notify the security team so they can review it, enhancing developer and AppSec collaboration. A confidential issue can also be created so that developers and security professionals can work together towards a resolution safely […]
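For reference, enabling the Free Tier scanners mentioned above typically amounts to including GitLab’s managed CI/CD templates in your pipeline configuration. A minimal sketch of a `.gitlab-ci.yml` (template names follow GitLab’s documented conventions; check the docs for version- and tier-specific prerequisites such as a built container image for Container Scanning):

```yaml
# Minimal sketch: enable SAST, Secret Detection, and Container Scanning
# by including GitLab's managed CI/CD security templates.
include:
  - template: Security/SAST.gitlab-ci.yml
  - template: Security/Secret-Detection.gitlab-ci.yml
  - template: Security/Container-Scanning.gitlab-ci.yml
```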

Read More

What You Should Know About This Epic Telegram Sticker Browser

Skia4Delphi is one of the most useful open-source 2D graphics libraries and Windows app development tools, and it can supercharge your FireMonkey and VCL apps’ user interfaces. One of the notable advantages of using this library is that it gives you the ability to combine or merge different animations with transparency, which certainly adds that “wow factor” to your apps. In this video, we will get to know the quick and easy process of adding your favorite Lottie animations and Telegram stickers using the TelegramStickerBrowser project, which is written in Delphi with the Skia4Delphi library.

How to Download Stickers from Lottie and Telegram

Lottie is a JSON-based animation format that can easily add high-quality animation to any native app. TSkLottieAnimation is one of Skia4Delphi’s main components and allows users to easily overlay animations on top of each other. We can also recall that Ian Barker managed to simulate a Star Trek-inspired data dashboard using these components. Lottie is an iOS, Android, and React Native library that renders After Effects animations in real time, allowing apps to use animations as easily as they use static images. Lottie files are also notably small, work on any device, and can scale up or down without pixelation. Aside from Lottie animations, you can also download stickers from Telegram, a popular messaging platform. To do so, simply browse for the stickers you want using the Sticker Downloader Bot in Telegram: copy a sticker to the clipboard and paste it into the chat box. This will provide you with a download link, which you can easily load into Delphi via the TelegramStickerBrowser project.

How to Use TelegramStickerBrowser

TelegramStickerBrowser, as the name clearly suggests, is a desktop browser for Telegram sticker (TGS) and Lottie (JSON) animation files. It uses the Skia4Delphi library, which provides you access to a great number of free and high-quality animations. The TelegramStickerBrowser includes a small selection of colorful HD stickers from LottieFiles, but you can also download additional stickers from Telegram via the StickerDownload bot. McKeeth will also demonstrate how to export these stickers from After Effects. If you want to learn more about this fascinating project that can supercharge your app’s interface, feel free to watch the video below.

Read More

Take our DevOps quiz!

We’re hoping to stump you… and we stumped ourselves on some of these questions, for sure. There are just 10 questions, so dive in, and you’ll see your score at the end. “What’s the most obscure programming language? Will AIOps be in your future? Take our DevOps quiz and find out.” – GitLab

Read More

Best Open Source Low Code Platform: Expectations vs. Reality

As a fast and easy alternative to traditional software development, the open source low code platform is fast becoming a hot topic. By 2026, the open source software and services market is expected to reach a whopping $50B. This incredible expansion and growth is due largely to the prediction that 75% of large enterprises will use at least four low-code development tools. Let’s unpack that assertion a little, along with some background information on what open source is and how low code platforms factor into these estimates of accelerated uptake of low code and no code development solutions.

What is an open source low code platform?

Since the buzz around low code platforms is everywhere, chances are you are familiar with the term. But what exactly is it? Here is the definition:

“A low code platform provides visual interfaces with basic logic, like drag and drop, requiring little to no coding.”

Why is it called an open source low code platform? Because the source code is free and easily available for the end user to download, deploy, and edit as they see fit. In simple words, an end user can view, copy, learn from, alter, or share the code. These platforms are used for scalability in building applications and processes. The right platform may vary by requirements, so if you want to get started, here is the beginner’s guide you must check out!

What are the pros of an open source low code platform?

1. Flexibility

Flexibility is a crucial quality of open source software that supports this approach to software deployment. The end user has ultimate modification possibilities, based on developer resources and requirements, without any hassle or fear of breaking the company’s terms and conditions.

2. Reliability

With open-source software, the user does not rely solely on a corporation to update, patch, and enhance the codebase; updates and patches can also be maintained and supported by the community.

3. Control and fast transformation

Organizations are better positioned to adapt and respond to rapidly changing business situations because of these low-code features: for their own deployment, the end user can alter and control the application’s underlying code. Companies with the resources to do so can use this to create a product that is truly suited to their requirements.

4. Higher productivity

What used to take months now takes days, if not minutes, thanks to low-code development, which allows more apps to be produced in less time. Time is no longer an impediment to true innovation, all because of low-code development.

5. Open community

The code can be peer-reviewed and maintained by participants in an open source community of passionate programmers and coders. This is indicative of the collaborative and helpful nature of the open source development community (think Wikipedia, GitHub, etc.).

6. Low cost

With the ability to build more apps in less time, decreases in cost have been observed. Using low-code tools, the average organization avoids hiring two IT developers, and over the course of three years, the applications built generated around $4.4 million in enhanced corporate value. (Forrester)

Expectation vs. Reality – What is the Gap?

Low-code means not having any code: as the name suggests, this is only kind of true. It requires at least a little coding, depending on some […]

Read More

Extend TMS WEB Core with JS Libraries with Andrew: Tabulator Part 2: Getting Data Into Tabulator

Last time out, we started our adventure into Tabulator, a JavaScript library that provides a grid-style control you can use in your TMS WEB Core projects. This time out, we’re going to focus on just one aspect: getting data into Tabulator. The material in this post may also be of general interest, no matter which grid control you’re using, or even if you’re just looking to get data into your TMS WEB Core application. We’re going to dip our toe into SPARQL, used to query the data behind Wikipedia. We’ll also show how to quickly set up an XData server as a proxy, both for SPARQL and for other remote data sources. This should be fun!

Motivation.

Tabulator, and virtually every other JavaScript-based grid control, expects to get data in JSON format, though the various components involved along the way have differing levels of rigidity when it comes to the JSON format itself, as we’ll see. Sometimes other formats are supported, like XML or CSV, but JSON is almost always the primary method. This isn’t much of a surprise, of course, as JavaScript and JSON are very closely related. Many interesting remote sources of data can be accessed via a REST API, and will often, if you ask politely, return data in JSON format. All good. However, formulating requests acceptable to the remote server, and actually getting the bits of data out of the JSON you get back, can sometimes be adventures all their own. At the same time, there may be issues in terms of how much data you can request from a remote API, or how fast it can serve up the data you are interested in. You might also be required to use an API key to make requests, one that you absolutely do not want to include in your TMS WEB Core application (or any JavaScript client application). And if you’re accessing multiple remote data sources, you might be multiplying the potential headaches to come.

So in this post, we’re going to contact multiple remote data sources. We’re going to use a private API key. And we’re going to address performance aspects of the data we’re using. Along the way, we’ll also be looking to explore a broader array of data types (images, for example). And once we finally get hold of some data, we’ll add it to a Tabulator table. We’ll also see if we can add a few nice finishing touches along the way, to help balance some of the Tabulator content in other upcoming posts.

The example we’re going to develop here is a simple one, at least to visualize. We want to use a date picker to select a birthday. With that birthday, we want to see a list of all of the actors (movies and TV shows) who share that birthday. And if we select an actor, we want to see all the movies and TV shows where they had a role. Sounds easy enough, right? Well, the challenges here are not with TMS WEB Core or with Tabulator, but rather with the complexities of getting data from remote data sources. But these are all solvable problems. Just maybe a little more tenacity is required. Be warned, odd […]
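As an illustration of the JSON wrangling involved, the W3C SPARQL JSON results format wraps every cell in a binding object, while a grid like Tabulator wants flat row objects. A sketch of the flattening step, with a made-up result snippet (the variable names `actor` and `born` are hypothetical, not from the post’s actual query):

```typescript
// SPARQL JSON results nest each cell as {"value": ...} under
// results.bindings; flatten them into the plain key/value rows
// a grid control like Tabulator expects.
interface SparqlResults {
  results: { bindings: Array<Record<string, { value: string }>> };
}

function toRows(data: SparqlResults): Array<Record<string, string>> {
  return data.results.bindings.map((binding) => {
    const row: Record<string, string> = {};
    for (const key of Object.keys(binding)) {
      row[key] = binding[key].value;
    }
    return row;
  });
}

// Hypothetical response for an "actors by birthday" query:
const sample: SparqlResults = {
  results: {
    bindings: [
      { actor: { value: "Sigourney Weaver" }, born: { value: "1949-10-08" } },
    ],
  },
};
// toRows(sample) yields rows ready to hand to the grid, e.g. setData(rows).
```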

Read More

Behind the scenes of Subway Surfers: A Q&A with SYBO

What have been the biggest challenges with scaling up so fast?

Mathias: For a smaller studio, supporting a game with 100 million players is an extreme task. It’s almost incomprehensible that such a young team was operating such a massive game. We, along with Unity, grew as the industry grew, so everything, including the parameters, was changing quickly. Our goal from the beginning was to give gamers the best possible feeling when playing Subway Surfers, and we keep this in mind with every step and decision we make. We took over publishing for the mobile platforms in mid-2020, and for the last 18 months we’ve been speed-learning how to self-publish. We’ve improved the game, but there’s always the potential to continue to grow and optimize. When we took over self-publishing, we were initially deactivated on the ad platform: although we started with only 10% of our player base, that still amounted to 10 million players, and they thought we were hackers.

How about the technical challenges?

Murari Vasudevan: On the technical side, for iOS and Android, the scale of the number of devices is enormous. For Android alone, there are more than 20,000 unique devices. When we took over publishing, we needed to ensure that the game ran smoothly on all the devices we targeted. We had issues with performance exceptions and crashes, and with the application not responding. We dealt with the scale of each of these issues happening live, as well as the need to diagnose them in real time. In order not to put off our player base, we needed to use the Unity Profiler, optimize, and work on the loading time, ensuring that it matched or improved upon the original.

To ensure good gameplay on even the least-powerful smartphones, we devised a number of techniques, including batching level geometry to reduce draw calls, rotating coins using C# scripts running on the CPU rather than shaders running on the GPU to further minimize draw calls, minimizing UI redraws, ensuring optimal timing in level generation, and not generating more content than a user can see. The Unity Profiler and Frame Debugger make it much simpler to monitor exactly how all these techniques are working. We also use Unity’s different quality settings to optimize fps and refresh rates on low-end devices. We spent a lot of time tweaking our bootstrapping funnel and optimizing our code to ensure that we were really efficient in our process. We had to have a lot of analytics in place to see what players were doing, how they were going through the various stages, and where they were dropping off. This was the flow that came with taking over so many devices and players, and it was an ongoing challenge we had to confront right from the start.

Read More

Detecting performance bottlenecks with Unity Frame Timing Manager

The Frame Timing Manager is a feature that provides frame-level time measurements, like total frame CPU and GPU times. Compared to the general-purpose Unity Profiler and Profiler API, the Frame Timing Manager is designed for a very specific task and therefore comes with a much lower performance overhead. The amount of information collected is deliberately limited, highlighting only the most important frame stats. One main reason for leveraging the Frame Timing Manager is to investigate performance bottlenecks in deeper detail. It allows you to determine what curbs your application’s performance: is it bound by the main thread or render thread on the CPU, or is it GPU-bound? Based on your analysis, you can take further action to improve performance. The dynamic resolution feature supports fixing detected bottlenecks on the GPU side: you can increase or reduce the rendering resolution to dynamically control the amount of work done on the GPU. During development, you can even visualize timings in an application HUD, giving you a real-time, high-level mini Profiler built right into your application, always readily available. Lastly, you can use the Frame Timing Manager for release-mode performance reporting: based on the collected information, you can send statistics about your application’s performance on different platforms to your servers for better overall decision-making.
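The CPU-versus-GPU question described above boils down to comparing per-frame thread timings. The following is a generic sketch of that classification logic, not Unity’s actual API or thresholds; the field names and sample numbers are illustrative:

```typescript
// Generic illustration of CPU- vs GPU-bound detection from frame timings.
// A frame is treated as GPU-bound when GPU time dominates; otherwise it is
// bound by whichever CPU thread (main or render) took longest.
interface FrameTiming {
  mainThreadMs: number;   // CPU main-thread time for the frame
  renderThreadMs: number; // CPU render-thread time for the frame
  gpuMs: number;          // GPU time for the frame
}

type Bottleneck = "gpu" | "cpu-main" | "cpu-render";

function classify(t: FrameTiming): Bottleneck {
  const cpuMs = Math.max(t.mainThreadMs, t.renderThreadMs);
  if (t.gpuMs > cpuMs) return "gpu";
  return t.mainThreadMs >= t.renderThreadMs ? "cpu-main" : "cpu-render";
}

// e.g. a frame with 18 ms of GPU work but only ~8 ms on either CPU thread
// classifies as GPU-bound, the case where dynamic resolution scaling helps.
```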

Read More