From the blog

Why Do You Need A JS Test Framework?

The JavaScript testing framework enables you to significantly boost your development workflow. It increases your team’s speed and efficiency. Also, it improves test accuracy. Besides, the JS test framework reduces test maintenance costs. It can be a lifesaver for projects on a tight budget. There are plenty of frameworks available online. But which is the best option? Can it help you efficiently test applications? Do you really need a JS test framework? In this post, you will find all the details. What is a JS test framework? The JS test framework is a tool for examining the functionality of JavaScript web applications. It helps you verify whether all the components are working properly. It enables you to easily identify bugs. Therefore, you can quickly take the necessary steps to fix the issues. Why do I need a JS test framework? Greater efficiency: Manually testing large web applications can take a huge amount of time. You don’t want to get involved in such a tiresome process. By using a JS test framework, you can quickly get the job done by automating the process. It enables you to continuously test and validate newly developed features. It shortens the feedback and testing cycle. As a result, you can deliver the application in significantly less time. Improve Accuracy: It’s quite usual for testers to make mistakes during monotonous manual testing. Therefore, you will not get accurate results. However, you can solve the issue by utilizing a JS test framework. It produces the same result every time you run it. Therefore, it can execute tests with consistent, repeatable accuracy. Reduce Cost: A JS test framework helps you get the job done with fewer resources, so you don’t have to spend extra money. If you have a tight budget, it can be very helpful for you. Read: Rapidly Build Testing Automation To Supercharge JavaScript App Quality What is the best JS test framework? 
The best JavaScript test framework is Sencha Test. It offers the most comprehensive solution for examining Ext JS apps. It allows you to create robust unit and end-to-end tests. As a result, you can deliver quality apps. Apart from Ext JS, it supports end-to-end testing of React apps. Overall, it is a versatile JavaScript testing framework. Why is Sencha Test the best solution for testing JavaScript apps?

- Enables you to auto-grab events while interacting with the app under test
- Provides comprehensive test results from automated and manual test runs
- Supports Istanbul integration for fixing code coverage gaps
- Allows you to execute code on any browser, local machine, or browser farm
- Enables you to use external Node modules to expand the scope of testing

Does it support automated test runs? Sencha Test comes with a Command Line Interface (CLI), which gives you the full power of automated test runs. Once tests are authored and checked into the source control repository, you can launch them from your Continuous Integration (CI) system. The CI system can automatically invoke the CLI once it detects a change to the application code or the test files. Does it allow me to schedule automated test runs? Sencha Test features Test Runner. You can use it to run selected unit and functional tests on any browser on […]

Read More

Welcome, ironSource!

With ironSource, Unity will take the linear process of making games and RT3D content and experiences and make it an interconnected and interactive one – creating the opportunity to innovate and improve at every step of the cycle. What if that process were no longer “first create, then monetize”? What if creators had an engine for live games that by default enabled them to gain early indicators of success for their games through user acquisition of their prototype, and gave them a feedback loop to improve their games based on real player interactions as early in the process as possible? Unity and ironSource’s combined offerings will uniquely position the combined company as the only game creation and growth platform for creators. This tighter integration between Unity’s Create and Operate means a more powerful flywheel and data feedback loop that further supports creators’ success and understanding of what’s working across gameplay, design and their monetization efforts. With the addition of SuperSonic, ironSource’s publishing solution, the combined company will also break down the barriers to publishing directly through the engine. Unity provides the engine that takes ideas from inception to being viable businesses. This is an engine that negates the need for developers and the industry at large to reinvent the wheel each time a game is created. Its gears include the tools to create, publish, run, monetize, and grow. It is this integration with ironSource, and the resulting interactivity and interoperability across the game lifecycle, that makes Unity + ironSource unique. It is our combined knowledge of and passion for game developers that keeps us innovating to meet their needs today and in the future. We succeed only if creators succeed, and with ironSource joining the Unity family, we will make it easier for them to do so in one centralized, integrated platform. Join the discussion on the Unity Forums. 
— Cautionary Statement Regarding Forward-Looking Statements This communication includes forward-looking statements. These forward-looking statements generally can be identified by phrases such as “will,” “expects,” “anticipates,” “foresees,” “forecasts,” “estimates” or other words or phrases of similar import. These statements are based on current expectations, estimates and projections about the industry and markets in which Unity Software Inc. (“Unity”) and ironSource Ltd. (“ironSource”) operate and management’s beliefs and assumptions as to the timing and outcome of future events, including the transactions described in this communication. While Unity’s and ironSource’s management believe the assumptions underlying the forward-looking statements are reasonable, such information is necessarily subject to uncertainties and may involve certain risks, many of which are difficult to predict and are beyond management’s control. These risks and uncertainties include, but are not limited to the expected timing and likelihood of completion of the proposed transaction, including the timing, receipt and terms and conditions of any required governmental and regulatory approvals of the proposed transaction;  the occurrence of any event, change or other circumstances that could give rise to the termination of the merger agreement; the outcome of any legal proceedings that may be instituted against the parties and others following announcement of the merger agreement; the inability to consummate the transaction due to the failure to obtain the requisite stockholder approvals or the failure to satisfy other conditions to completion of the transaction; risks that the proposed transaction disrupts current plans and operations […]

Read More

How To Build The Best Cross Platform Apps In 2022

In this fast-paced tech world, no business wants to miss the chance to have their own mobile app on the Google Play Store or App Store. To make the most of it, it is essential to create an app that runs on as many operating systems as possible. Building a cross platform app can play a key role in this. In this article we’ll explain what we mean by that statement. Why choose to develop cross platform apps at all? If you choose to go down the route of developing separate apps for each of the platforms you want to target, for example Windows, macOS, iOS and Android, this separate-app philosophy brings a number of significant disadvantages which can outweigh any potential benefits. Apart from the version problems of having several different code bases and deployable packages, there are other intangible costs, like budgeting for potentially different development environments and suppliers. Using a single environment and IDE such as Embarcadero’s RAD Studio, which can produce cross platform apps targeting all your desired platforms in one place while still creating native apps, focuses your development efforts and simplifies your resource requirements and budgeting into one quantifiable number. This is why cross platform app development has become the undisputed option for businesses seeking a presence on both Android and iOS. Cross platform apps are popular because they reduce – in fact, completely eliminate – the need to create a separate codebase for each of the different platforms. A cross-platform app can run on a variety of devices and platforms. In this article, I’ll give you a rundown of native vs cross platform development, the benefits of building a cross platform app, how to build a cross platform app, and the best cross platform app builder. Let’s start! What is the difference between single platform and cross platform apps for mobile? 
The debate on choosing between individual, vertical, single platform development – where there is a different app for each target – and cross platform development – where there is one code base which can be compiled to produce apps and packages for all the desired targets – has been significant, often dividing opinion in the tech world. Some believe that creating separate apps for each individual platform is superior to a cross platform approach. Companies like Uber, on the other hand, are rewriting their driver app using a cross platform app framework. Cross platform development has become increasingly common in recent years. To be clear: cross platform app development is the process of designing mobile (and in some cases desktop) apps that can run on a variety of systems. Developers choose this type of development because it requires only one programming effort, and the app can run on Android, iOS, or Windows. In the case of RAD Studio, cross platform apps written using its FireMonkey FMX framework can run on Windows, macOS, Android, iOS and, with FMXLinux, on Linux computers too. Compare this to single target vertical mobile apps. These kinds of apps are usually only able to operate on one vendor’s platform and, in the case of Apple, for example, also require you to use their own IDE which only works on their hardware. The work done on such a vertical or narrow […]

Read More

How to migrate Atlassian’s Bamboo server’s CI/CD infrastructure to GitLab CI, part two

In part one of our series, I showed you how to migrate from Atlassian’s Bamboo Server to GitLab CI/CD. In this blog post we’re going to take a deep dive into how it works from a user’s perspective. Get started You’ve deployed the demo, so it’s time to play with it to understand how it works. Let’s imagine that one of the members of our project is John Doe. He is a software engineer responsible for developing some components (app1, app2, and app3) of the entire product, and he and his team would like to test those components in several combinations in myriad preview environments. So, what does that look like? First of all, let’s make some commits to the app1, app2, and app3 source code and get successful builds upon those commits. After that, we should create releases for those apps to be able to deploy them (as the deployment part of the apps’ CI config only runs when triggered by a Git tag, i.e., a GitLab release). A release can be created by launching the last step (manual-create-release) in a commit pipeline. That would give us a new release with an ugly name containing the date and commit SHA in the patch part (in accordance with the semver scheme): On the Tags tab for the same app you can now see that the deployment part of the pipeline has been triggered by the just-created GitLab release, but no actual environments to deploy are displayed (the _ item in the Deploy-nonprod stage is not an env): Create an environment But before that we have to briefly switch to another team, the one responsible for preparing infrastructure IaC templates. Navigate to the infra/environment-blueprints project and pretend you are a member of that team doing their job. Namely, imagine you have just created some initial set of IaC files (they are already kindly prepared by me and present in the repository). You’ve tested them and now you feel that they are ready to be used by the other members of the project. 
You indicate such readiness of a particular version of the IaC files by giving it a Git tag. Let’s put a tag like v1.0.0 onto the HEAD version. You will see shortly how the tags are used. But first let’s make some changes to the IaC files (e.g., add a new resource for some of the apps) and create a second Git tag, let’s say v1.1.0. So, at this moment we have two versions of IaC templates (or blueprints) for our infrastructure – v1.0.0 and v1.1.0. Deploy an app into the environment Now we can return to John and his team. We assume John has somehow been informed that the version of the IaC templates he should use is v1.0.0. He wants to create a new preview environment out of the IaC templates of that version and put app1 and app2 into that env. (Here starts a description of how a user interacts with the infrastructure-set Git repo. Notice that the eventual idea is that it should be a Merge Request workflow – where you first get a Terraform plan within a Merge Request and can apply such a plan by merging the MR – which is widely advocated by GitLab, but for the sake […]
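As a sketch of the tag-gating described above – the job names and script lines here are hypothetical, not the demo’s exact configuration – a `.gitlab-ci.yml` deployment job is typically restricted to tag pipelines with a rule on the predefined `$CI_COMMIT_TAG` variable:

```yaml
# Runs only in tag pipelines, i.e., when a GitLab release creates a Git tag.
deploy-nonprod:
  stage: deploy-nonprod
  script:
    - echo "Deploying $CI_PROJECT_NAME $CI_COMMIT_TAG to a preview environment"
  rules:
    - if: '$CI_COMMIT_TAG'

# The last step of a commit (branch) pipeline; launched manually to cut a release.
manual-create-release:
  stage: release
  script:
    - echo "Creating release 1.0.$(date +%Y%m%d%H%M)-$CI_COMMIT_SHORT_SHA"
  rules:
    - if: '$CI_COMMIT_TAG == null'
      when: manual
```

This split is what produces the behavior described above: commit pipelines end in a manual release step, and the deployment stage appears only once a tag exists.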

Read More

5 Tips for managing monorepos in GitLab

GitLab was founded 10 years ago on Git because it is the market-leading version control system. As Marc Andreessen pointed out in 2011, we see teams and code bases expanding at incredible rates, testing the limits of Git. Organizations are experiencing significant slowdowns in performance and added administration complexity working on enormous repositories or monolithic repositories. Why do organizations develop on monorepos? Great question. While some might believe that monorepos are a no-no, there are valid reasons why companies, including Google and GitLab (that’s right! We operate a monolithic repository), choose to do so. The main benefits are:

- Monorepos can reduce silos between teams, streamlining collaboration on design, development, and operation of different services, because everything is within the same repository.
- Monorepos help organizations standardize on tooling and processes. If a company is pursuing a DevOps transformation, a monorepo can help accelerate change management when it comes to new workflows or the rollout of new tools.
- Monorepos simplify dependency management because all packages can be updated in a single commit.
- Monorepos offer unified CI/CD and build processes. Having all services in a single repository means that you can set up one system of pipelines for everyone.

While we still have a ways to go before monorepos or monolithic repositories are as easy to manage as multi-repos in GitLab, we put together five tips and tricks to maintain velocity while developing on a monorepo in GitLab. 1. Use CODEOWNERS to streamline merge request approvals CODEOWNERS files live in the repository and assign an owner to a portion of the code, making it super efficient to process changes. Investing time in setting up a robust CODEOWNERS file that you can then use to automate merge request approvals from required people will save time down the road for developers.  
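For illustration, a minimal CODEOWNERS file for a monorepo might look like this (the paths and group names are hypothetical):

```
# Each pattern maps one part of the monorepo to the team that reviews it.
/frontend/      @org/frontend-team
/services/api/  @org/backend-team
/infra/         @org/infrastructure-team
*.md            @org/tech-writers
```

With a file like this in place, every merge request is automatically routed to the owners of exactly the directories it touches, instead of a single shared approver queue.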
You can then set your merge requests so they must be approved by Code Owners before merge. The Code Owners specified for the changed files in the merge request will be automatically notified. 2. Improve Git operation performance with Git LFS A universal truth of Git is that managing large files is challenging. If you work in the gaming industry, I am sure you’ve been through the annoying process of trying to remove a binary file from the repository history after a well-meaning coworker committed it. This is where Git LFS comes in. Git LFS keeps all the big files in a different location so that they do not dramatically increase the size of the repository. The GitLab server communicates with the Git LFS client over HTTPS. You can enable Git LFS for a project by toggling it in the project settings. All files in Git LFS can be tracked in the GitLab interface. GitLab indicates which files are stored there with the LFS icon. 3. Reduce download time with partial clone operations Partial clone is a performance optimization that allows Git to function without having a complete copy of the repository. The goal of this work is to allow Git to better handle extremely large repositories. As we just talked about, storing large binary files in Git is normally discouraged, because every large file added is downloaded by everyone who clones or fetches changes thereafter. These downloads are slow and problematic, especially when working from a slow or unreliable internet connection. Using partial clone with a […]
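To sketch how a partial clone is requested in practice (using a throwaway local repository so the example is self-contained, rather than a real GitLab remote), Git's `--filter=blob:none` option asks for a "blobless" clone: commits and trees are downloaded up front, while file contents are fetched on demand.

```shell
set -e
tmp=$(mktemp -d)
cd "$tmp"

# Create a small "server" repository with one committed file.
git init -q server
cd server
git config uploadpack.allowFilter true   # let clients request object filters
echo "hello" > app.txt
git add app.txt
git -c user.email=ci@example.com -c user.name=ci commit -qm "add app.txt"
cd ..

# Blobless partial clone: history is complete, blobs arrive lazily.
git clone -q --filter=blob:none "file://$tmp/server" partial
git -C partial rev-list --count HEAD
```

On a monorepo full of large binaries, the initial clone time drops dramatically because none of those blobs are transferred until a checkout actually needs them.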

Read More

Top 5 Tools In My Web App Development Toolkit

There is no denying that web and mobile apps are part and parcel of all businesses, whether small or large. Organizations rely on these apps to reach out to their customers, automate their business processes, keep track of sales, monitor employee performance, and even make future strategic decisions. Naturally, this makes web app development a critical process for any software engineer. In the context of web application development, it is important to know about the best JavaScript tools to speed up the entire process of app creation. These tools are now key to accelerating development and reducing time-to-market. Developers can now spend more time on meeting the requirements of their clients and less time writing boilerplate code. When it comes to tools for JavaScript app development, Sencha Ext JS is not just a complete framework for web and mobile app building. It also includes the most important tools required in all stages of app creation. Whether you are designing the app, creating its user interface, writing its business logic, designing its database schema, or packaging it for the end-user, Sencha Ext JS has an automated solution for you. Here is a list of the top 5 must-have tools, which are indispensable for any developer. Continue reading to discover why these tools are important and why you should include them in your app development toolkit. 1. What is Sencha Architect in Web App Development? Sencha Architect is an app builder with a visual interface. You can use Sencha Architect to develop cross-platform HTML5 web applications for the desktop. As Ext JS is also a complete JavaScript mobile app framework, you can use Sencha Architect while building mobile apps. One of the most awesome features of Sencha Architect is that it empowers you to build your apps using drag and drop. This way you don’t have to spend much time on manual coding. Much of the boilerplate code is generated for you. 
Moreover, the generated code is optimized for high performance. The automatic code generation feature is available for desktop as well as mobile apps. Configurations and properties of many Ext JS UI components can be altered using a visual WYSIWYG window. This reduces the chance of coding errors that developers often make and helps speed up overall development. 2. What is Sencha Themer? While the default styles and themes provided by Ext JS are awesome, they may not be compatible with your organization’s color schemes. Sencha provides an easy solution for customizing all styles, color schemes, and themes with Sencha Themer. Sencha Themer, like Sencha Architect, is a visual tool with a graphical interface. It enables you to style Ext JS apps without writing a single line of code. Additionally, you can create custom themes and reuse them in other apps. Sencha Themer also includes a smart color palette, which enables you to apply different color combinations to different component states. The color palette, with its progressively lighter and darker color shades, makes it very easy to choose the base, body background, and font colors. What’s more, Sencha Themer has a font management option. You can quickly select the fonts of your choice for your app. Additionally, you can add web fonts from Google Fonts. 3. How Does Sencha Cmd Accelerate Web App […]

Read More

SerializeReference improvements in Unity 2021 LTS

SerializeReference includes support for polymorphism, which means that a field can be assigned an instance of a class that derives from the field type. In fact, we even support fields of type “System.Object”, which is the root base class of every C# class. But this opens up the possibility of a successfully compiled project missing the definitions of classes that were previously available and had been saved into a scene or asset file. Classes can go missing when source files are removed, classes are renamed, or classes are moved to another assembly. When loading a SerializeReference host object, the fully qualified type name of each managed reference object is examined and must be resolved back to a valid class type in order to instantiate it. In previous versions of Unity, missing classes could put the entire “host” object into an error state without loading any of the valid managed reference objects. So if you had a “host” with an array of 15 managed reference objects, but a single object could not be resolved, then you would not see any of them in the Inspector. An error would be logged in the console – even though the host object was not visually marked as being in an error state when inspected – and all the edits made would be silently discarded. In Unity 2021, we now instantiate all loadable managed reference objects and replace the missing ones with null. This gives users an opportunity to see more of the state of the host object, and facilitates the resolution of missing types. Meanwhile, if a missing type is restored while the host object is loaded, then the triggered Domain Reload will restore the managed reference objects, and all fields referencing them will be updated properly. 
This is an example of how objects with missing types appear in the Inspector: In 2020.3, the Fruit class is missing yet the Inspector does not show any array elements, and there is no indication of an error: In 2021.3, the Inspector warns you that the missing Fruit objects appear as null entries, whereas the Sandwich objects continue to be displayed: The error messages in the console related to missing types have also been updated so that they’re less repetitive – they simply identify which host objects have missing types. Here’s an error message in 2020.3: Compare it to this warning message in 2021.3:
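As a rough sketch of the pattern described above (the class names are hypothetical, echoing the Fruit/Sandwich example), a polymorphic list serialized with SerializeReference might look like this:

```csharp
using System;
using System.Collections.Generic;
using UnityEngine;

// Base type for polymorphic items; [Serializable] so Unity can save them.
[Serializable] public abstract class Ingredient { }
[Serializable] public class Fruit : Ingredient { public string name; }
[Serializable] public class Sandwich : Ingredient { public int layers; }

public class Lunch : MonoBehaviour
{
    // [SerializeReference] stores each element by reference, with its fully
    // qualified type name, so Fruit and Sandwich can share one list. If the
    // Fruit class is later deleted, 2021 LTS still loads the Sandwich
    // entries and shows the missing Fruit entries as null.
    [SerializeReference] public List<Ingredient> items = new List<Ingredient>();
}
```

Without the attribute, a plain serialized `List<Ingredient>` field would flatten every element to the declared field type, which is why SerializeReference is the mechanism affected by missing-type resolution.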

Read More

Revisit your marketing strategy: B2B buyer behavior is changing

As a B2B marketer, you know that the sales rep relationship remains critical to closing deals in the B2B world. But how can you support that relationship-building process in a world where hybrid events are more common? Let’s take a step back and appreciate just how much B2B event marketing has changed. By necessity during the pandemic, events made a rapid transition from in-person to virtual, and that served the purpose of maintaining a connection with buyers. The rapid rise of video conferencing technology allowed marketers to continue delivering events remotely, albeit in a more restricted way than pre-pandemic. But events are evolving, resisting the call to return to previous in-person-only formats and instead adopting a more refined hybrid approach. The key benefit of this approach is being able to reach a wide audience, regardless of geography, time zone and other factors. What sets hybrid events apart from remote events is embracing the tools available to enhance the event experience with real-time 3D interactive applications. These can be experienced by both in-person and remote attendees. Maintaining a focus on the all-important B2B buyer journey, marketers can give buyers at a hybrid event the opportunity to self-educate on products in a way that moves beyond conventional marketing methods such as leaflets and videos. With real-time 3D technology, it’s possible to provide custom-selected supporting information to the buyer – for example, including in a follow-up email a personalized sales presentation that displays the exact product configuration they showed an interest in, along with a quote. Simultaneously, your sales team is equipped with that same data about the preferences of each buyer, placing them in an ideal position to discuss individual needs in the follow-up sales call. Wondering what a virtual showroom really looks like? 
Check out the story behind one of the world’s top manufacturers of woodworking machinery, which worked with its agency partner to create a fully interactive showroom that not only caters to remote visitors, but also supports the buyer self-serve journey by providing additional product information within the experience. Do your events engage everyone, both in-person and remote attendees? Do they give you the opportunity to gather meaningful insight into what your customers really want?

Read More

Change the look & feel of your chart-empowered application with a few clicks!

TMS FNC Chart has long had a lot of customization options in terms of appearance and overall look & feel, but with so many property settings the result was achievable yet time-consuming. In v2.0 we went the extra mile and bundled all your highly appreciated feedback into a very exciting new feature: “Global Appearance”. “Global Appearance” is a feature that allows you to customize your component with only a few lines of code, or a few clicks at design time. Changing the overall look & feel of the chart can now be done in a couple of seconds! Global Appearance is a bundle of the following settings and customization options:

- Monochrome color scheme
- Excel color scheme
- Custom/extendable color list
- Global font color, name & size
- Various descendant class types for nicer fine-tuning

To find out more about “Global Appearance”, I invite you to watch the video below. Feedback is important! The above result is based on feedback – your feedback. Sending us suggestions for new features, shortcomings & ideas to improve our products is always appreciated. We analyze the feedback and see what is feasible and in which time frame. Then we go ahead and assign a team of experienced developers to the task, which always results in the best possible implementation. Have some ideas? Don’t hesitate to contact us and/or leave a comment below.

Read More

Discover The Power And Capabilities Of The FireDAC Data Access Library

FireDAC is a powerful multi-device data access library with a unique set of Universal Data Access Components for developing multi-device database applications in Delphi and C++Builder. It is interesting to note that FireDAC is not limited to Windows application development, as it offers complete cross-platform support for RAD Studio. This means that FireDAC can be used on all the different platforms, including 32- and 64-bit Windows, macOS, iOS, Android, and Linux. In this video, Cary Jensen of Jensen Data Systems will walk us through the FireDAC library and why it quickly became the preferred data access mechanism in RAD Studio. What are the Benefits of Using FireDAC? The webinar will provide us with an overview of FireDAC and how it relates to other RAD Studio data access mechanisms. Cary will discuss all the notable basic and advanced features of FireDAC and will guide us through the process of migrating applications to FireDAC from an existing data access mechanism. RAD Studio includes a number of data access mechanisms, including dbGo, which is used for ActiveX data components; IBExpress for InterBase; the Borland Database Engine, or BDE; dbExpress; and even myDB, which is based on the client dataset. Like all the aforementioned frameworks, FireDAC implements the TDataSet interface. Specifically, there is a large number of capabilities introduced in the TDataSet class, and those capabilities are found in every one of the data access mechanisms in RAD Studio. Interestingly, everything you can do with the BDE is supported by FireDAC. So if you are familiar with using both the BDE and TDataSet, then you already know how to use this framework. What makes FireDAC so powerful? FireDAC, however, is not simply similar to the BDE. For instance, FireDAC is notably SQL-friendly. It is internally structured a lot like frameworks such as the .NET DataSet. FireDAC goes beyond the dataset because it implements many other advanced features. 
It implements aggregates, group states, filtered navigation, and more. To top it all off, FireDAC has client-side indexes. The webinar will also discuss FireDAC’s power and practicality and why it became a top choice among data access mechanisms. Aside from its complete cross-platform support, FireDAC also has exceptional support for databases: it supports most of the major commercial and open-source databases. Cary will also give a number of demonstrations, including the process of migrating applications using ODBC and side-by-side deployment. To learn more about this powerful data access mechanism, feel free to watch the webinar below. You can download a free trial of RAD Studio Delphi today and try the power of FireDAC for yourself.
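As an illustrative sketch (the query, table, and component names are hypothetical), filtered navigation and a client-side index in FireDAC look like ordinary TDataSet code, with no extra round trip to the server:

```pascal
// Assumes a TFDConnection is configured and FDQuery1: TFDQuery is linked to it.
FDQuery1.Open('SELECT CustomerID, CompanyName, Country FROM Customers');

// Filtered navigation: restrict the visible rows locally, in memory.
FDQuery1.Filter := 'Country = ''USA''';
FDQuery1.Filtered := True;

// Client-side index: re-sort the fetched rows without issuing new SQL.
FDQuery1.IndexFieldNames := 'CompanyName';
```

Because both the filter and the index are applied to the in-memory dataset, the same result set can be sliced and re-ordered repeatedly without touching the database again.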

Read More