When I was really getting my feet wet with 3D modeling, games like Rime and Breath of the Wild always managed to captivate my interest. Their abstracted art styles and use of color inspired a sense of adventure and exploration which really resonated with me. Non-photorealistic game art still sits at the core of what I strive for. More recently, Ubisoft’s open world games have set the bar for me in terms of scale and worldbuilding workflows. I can only imagine the amount of research and development that must go into vegetation alone. Over time, I started learning more about the natural world through reference materials, which drove me to see those places in person. This provided me ample opportunities to capture source materials and generally soak up inspiration. In a sense, observing and simplifying the intricacies of nature and translating them to a video game context sits at the heart of my job!
The row of spheres in the image above was created by positioning a sequence of Rigidbodies in a row and freezing their X and Z positions via the Rigidbody constraints. An upward force was then applied, driven by an Animation Curve, with a different curve for each sphere so the variations could be compared side by side. You can use this technique to find the desired level of bounce for an object, or to tweak an existing bounce to balance out its characteristics. As a designer, being able to manipulate the shape of the upward force helps you create abstractions of more complex functions.

Curves are a powerful XY chart data type, and though not technically perfect, they can help you prototype quick damping solutions that can be visually edited in the Inspector and saved as presets. In this blog on the art of damping, Alexis Bacot highlights all the things that “depend on good damping. Camera, animation, movement, color gradients, UI transitions, and many many more… it’s used everywhere! Understanding damping is key to achieving great polish. Damping alone can make the difference between a bad or good experience.” In the same post, he demonstrates how Unity’s SmoothDamp can be used to create a beautiful ease in and out that reacts accurately to the target changing. But it does not bounce like an “advanced spring damper that can oscillate, which is great for car suspension or fake ball physics” – an example of where Animation Curves provide a powerful advantage.

Of course, curves have more uses than as an XY data type to manipulate gameplay. They can also be treated as an evaluation tool to capture data visually using AddKey via the Unity API. To evaluate a position over time, such as the damping in the vehicle suspension example or the falling spheres, call AddKey(elapsedTime, currentSpringCompression) in a method, and then invoke that method at a repeating rate of captureResolution via InvokeRepeating.
A capture resolution of 0.1f means that a key is added to the curve every 0.1 seconds. You can view the miniature curve preview in the Inspector, or open the full graph to see the complete data.
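Unity's API itself is C#, but the capture idea above can be sketched language-agnostically. The following Python sketch simulates a damped spring and records a (time, compression) key at a fixed capture resolution, mirroring the AddKey + InvokeRepeating pattern; all parameter values are illustrative, not taken from the original project.

```python
# Sketch only: a damped spring sampled at a fixed capture resolution,
# mirroring Unity's AddKey + InvokeRepeating pattern in plain Python.

def capture_spring_keys(stiffness=40.0, damping=2.0, dt=0.001,
                        duration=2.0, capture_resolution=0.1):
    """Integrate a damped spring and sample its compression into keys."""
    x, v = 1.0, 0.0                       # initial compression, velocity
    keys = []                             # (elapsedTime, compression) pairs
    steps = round(duration / dt)
    capture_every = round(capture_resolution / dt)
    for i in range(steps + 1):
        if i % capture_every == 0:        # the InvokeRepeating equivalent
            keys.append((round(i * dt, 3), x))
        a = -stiffness * x - damping * v  # spring force plus damping
        v += a * dt                       # explicit Euler integration
        x += v * dt
    return keys

keys = capture_spring_keys()
print(len(keys))          # 21 keys: t = 0.0, 0.1, ..., 2.0
print(keys[0])            # (0.0, 1.0) -- starts fully compressed
```

Plotting these keys (or feeding them into an AnimationCurve in Unity) gives the same damped-bounce shape you would see in the Inspector graph.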
CE: What is the original story behind Kidoz?

JW: Kidoz first started as a developer of apps and other software specifically for children under 13. Through that experience, we identified that one of the biggest challenges for advertisers and publishers is reach and monetization within young audiences. For example, we saw regulatory policies from Google and Apple tightening and becoming more stringent over time as they increased their privacy and security requirements. At that time, there was an opportunity to launch a proprietary ad network using our own custom-built SDK and to partner with app developers whose target audience is children.

CE: How do you work with publishers and advertisers?

JW: We only use contextual targeting to identify unique segments within the thousands of apps we reach through the Kidoz SDK and our direct supply partners. We execute special campaign plans for our advertising partners, for whom compliance with COPPA, GDPR, Google, and Apple policies is business- and brand-critical. This commitment to compliance and performance has made Kidoz the number one kid-safe mobile network. Kidoz has a large network of sales and agency partners that represent the Kidoz inventory globally, and in 2021 alone we activated deals in 58 different countries.

CE: What kind of brands does Kidoz work with?

JW: We work, directly or through our sales partners, with most of the global brands that advertise to children and families. These brands prioritize compliant, child-safe media for their advertising, and they know Kidoz can help them reach their audience at scale. In the toy industry, some of the brands we work with include Lego, Mattel, Hasbro, Playmobil, MGA, and Spinmaster. In entertainment, we work with Disney, Netflix, Universal, Paramount, Warner, Sony, Amazon Prime, Nickelodeon, and many more.
These brands are global leaders, and while they are the most active when it comes to child-directed media, they are also deeply committed to advertising compliance; therefore the Kidoz solution and our network partners must be completely compliant in both technology and operations.

CE: What should publishers and advertisers be thinking about to stay compliant while increasing their revenue?

JW: First and foremost, advertisers and publishers need to ensure they're working with COPPA- and GDPR-compliant partners that stay aligned with the latest policies and keep their technology up to date. There's no room for mistakes when it comes to advertising to child audiences, as the penalties for noncompliance can be large. Since children are a significant percentage of all app traffic, there's a growing trend among developers to operate apps with an age gate that segments users into COPPA users (currently under 13 years of age) and non-COPPA users (13 and above). With age gating, publishers can operate two separate monetization technologies, which allows for the distinct treatment of both user segments and facilitates compliant monetization of each.

CE: How do you see child advertising and COPPA evolving in the next decade?

JW: The most important evolution over the last few years has been the tighter enforcement by the Federal Trade Commission (FTC) and the issuance of penalties for infringements. This has forced game publishers, brands, and the platforms themselves to adopt and enforce COPPA and GDPR compliance. Many other countries […]
Every day, something close to a billion people worldwide use Windows apps for one reason or another, and knowing how to build a Windows app is a valuable skill. Think, for instance, of how modern graphic designers use apps for photo editing, illustration, vector graphics design, and a host of other tasks. It's almost impossible to imagine the millions of businesses around the world not using the many different apps and devices for collaboration, task management, communication, and more. Despite strong competition from Apple and constant pressure from the Linux community, Windows still thoroughly dominates the desktop. Hence, there is a huge demand for Windows app development.

Once the preserve of a select few 'computer geeks', programming is now much more accessible, with software design skills being taught widely to students from a broad range of backgrounds. This greater competition means developers need to find ways to build functional apps faster. But how do you build a Windows app quickly with less coding? By using the best native app builder software! If you're looking for the best native app builder, RAD Studio is what you need. RAD Studio is a powerful IDE (Integrated Development Environment) that offers all the features you need for rapid app development. In this article, we'll discuss how to build a Windows app efficiently and quickly. We'll also show you why RAD Studio is the best app builder software.

What is app builder software and how can we use it to build a Windows app?

App builder software provides an easy and efficient way to build apps. The purpose of these tools is to simplify the app development process and enhance developer productivity. Different developers use different types of app builders, depending on their development approach. For instance, people with little coding knowledge usually use no-code app builders or low-code app development platforms.
These platforms come with a drag-and-drop interface, allowing users to quickly drag components onto the screen. However, if you want a complete development platform that gives you a comprehensive set of tools for writing and testing code efficiently, an IDE is the best choice. IDEs typically consist of a code editor, a compiler, and a debugger. The code editor is where developers write and edit the source code. The compiler then translates this code into machine code that a computer can execute. The debugger helps with testing the software. Some full-featured IDEs like RAD Studio also offer additional features, such as automatic code completion, allowing developers to find references to other resources, comment on lines of code, and more.

Why should you use application builder software?

If you're a developer wondering how to build a Windows app faster, an efficient app builder is what you need. App builder software for PC offers several benefits:

- Speeds up time to market. App builders and IDEs help businesses bring their products to market faster. An efficient GUI makes it easy to create an application quickly.
- Collaboration. Efficient app builders like RAD Studio offer collaboration features, allowing you to work on an app with your team members.
- Better code quality. Developers and engineers test code to eliminate defects and improve software quality. With a powerful app builder like RAD Studio, you can test your code at any time […]
Motivation

If we've got excellent web application development tools, why do we need desktop applications at all? It's a good question. And the answer, quite often, is that we don't. This is what has made the web such an important platform, after all, and also what makes products like Chrome notebooks as popular as they are. You can accomplish quite a lot using just web applications, and for many people it is now possible to get by with web applications alone. Products like TMS WEB Core are making this easier all the time by enabling substantially better web applications with access to more data and more systems than ever before. And without question, web technologies have come a long way in a relatively short period of time.

However, there are still many situations where the browser sandbox is too restrictive to accomplish certain tasks. And I'm sure we're all well aware that the browser sandbox and its rules are in place for very good reasons. So while web technologies will continue to evolve, there are some places they just aren't going to reach anytime soon, and that's where desktop apps come in. Here are a few examples where a desktop application solution might win out over a web application solution.

- An application needs direct read and write access to the local filesystem.
- Application data is not permitted to leave a location (i.e., saving to 'the cloud' poses unacceptable security risks).
- An application needs access to hardware that does not have an equivalent (or performant enough) web interface.
- Access to the application needs to be more strictly controlled.
- Application changes need to be more strictly controlled (e.g., SOX or ISO 9000-type requirements).
- Users work in an environment where web browsers aren't well supported.
- Users work in an environment where desktops are strictly locked down in terms of application access.
- An application needs access to OS-level functionality that a browser does not have access to.
- Desire to standardize the application interface (avoiding non-standard browser 'chrome' or plugins that might interfere).

Granted, there are many kinds of complexity at work here, and no doubt there are just as many ways to address these problems. For example, a web application could be served up within a local network that has no external access at all, ensuring that data doesn't leave that location. And there's even an evolving web standard for accessing serial ports. Crazy, really. But in any event, there may be solid reasons for having a desktop version of a web application (or even a mobile version beyond what can be accomplished with a PWA-compliant application). Whatever the rationale, Miletus is here to help address it.

Getting Started

For our example application, we're going to carry on with the Leaflet example that we covered a couple of days ago, an almost minimalist interactive mapping web application. It has been updated slightly to fix a few minor internal errors, to be more 'responsive' and resize to fit its container, and to have a slightly improved geofence data entry interface, where you can cancel the entry, see the points as a polyline while you're entering them, change the color of the geofence created, and delete geofences. To get this all working, we start by creating a new TMS […]
To monitor the health of GitLab.com, we use multiple SLIs for each service. We then page the on-call when one of these SLIs is not meeting our internal SLOs and is burning through the error budget, with the hope of fixing the problem before too many of our users notice. All of our services' SLIs and SLOs are defined using jsonnet in what we call the metrics-catalog, where we specify a service and its SLIs/SLOs. For example, the web-pages service has an apdex SLO of 99.5% and multiple SLIs, such as the loadbalancer, the Go server, and time to write HTTP headers. Because these are defined in code, we can automatically generate Prometheus recording rules and alerting rules that follow the multiple-burn-rate alerting approach. Every time we start burning through our 30-day error budget for an SLI too fast, we page the SRE on-call to investigate and solve the problem.

This setup has worked well for us for over two years, but one big pain point remained when there was a service-wide degradation: the SRE on-call was getting paged for every SLI associated with a service or its downstream dependencies. That can mean up to 10 pages per service, since a service has 3-5 SLIs on average and we also have regional and canary SLIs. This is very distracting and stress-inducing, and it doesn't let the on-call focus on solving the problem; they are stuck acknowledging pages instead. For example, below we can see the on-call getting paged 11 times in 5 minutes for the same service. What is even worse is a site-wide outage, where the on-call can end up getting 50+ pages because all services are in a degraded state. It was a big problem for the on-call's quality of life, and we needed to fix it.

We started doing some research on how to best solve this problem and opened an issue to document all possible solutions. After some time, we decided to go with grouping alerts by service and introducing service dependencies for alerting/paging.
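Our actual rules are generated from the metrics-catalog jsonnet, but to illustrate the shape of a multi-window burn-rate alert, here is a hand-written sketch in plain Prometheus rule syntax. The metric names and thresholds below are hypothetical (the 14.4x factor is the common fast-burn convention, applied to the 0.5% error budget of a 99.5% apdex SLO), not our generated rules.

```yaml
# Illustrative only: metric names and thresholds are hypothetical.
groups:
  - name: web-pages-slo-alerts
    rules:
      - alert: WebPagesApdexSLOViolation
        # Page only when both a long (1h) and short (5m) window are
        # burning the 30-day error budget at >14.4x the sustainable
        # rate -- a fast burn that needs immediate attention.
        expr: |
          (1 - apdex:ratio_1h{type="web-pages"}) > (14.4 * 0.005)
          and
          (1 - apdex:ratio_5m{type="web-pages"}) > (14.4 * 0.005)
        labels:
          severity: s2
          pager: pagerduty
        annotations:
          summary: "web-pages apdex is burning its 30d error budget too fast"
```

Pairing a long and a short window like this keeps the alert from continuing to fire long after the burn has stopped.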
Group alerts by service

The smallest and most effective iteration was to group the alerts by service. Taking the previous example, where the web-pages service paged the on-call 11 times, it should have paged the on-call only once and shown which SLIs were affected. We use Alertmanager for all our alerting logic, and it already has a feature called grouping that lets us group alerts by labels. This is what an alert looks like in our Prometheus setup:

ALERTS{aggregation="regional_component", alert_class="slo_violation", alert_type="symptom", alertname="WebPagesServiceServerApdexSLOViolationRegional", alertstate="firing", component="server", env="gprd", environment="gprd", feature_category="pages", monitor="global", pager="pagerduty", region="us-east1-d", rules_domain="general", severity="s2", sli_type="apdex", slo_alert="yes", stage="main", tier="sv", type="web-pages", user_impacting="yes", window="1h"}

All alerts have the type label attached to them to specify which service they belong to. We can use this label, together with the env label, to group all the production alerts firing for the web-pages service. We also had to update our PagerDuty and Slack templates to show the right information. Previously we only showed the alert title and description, but this had to change since we are now alerting by service rather than by one specific SLO. You can see the changes at runbooks!4684. This was already a big win! The on-call now gets a page saying "service web-pages" and then the list of SLIs that are […]
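The grouping itself is plain Alertmanager routing configuration. A minimal sketch of the idea (the timing values here are illustrative, not our production settings):

```yaml
# Sketch: group firing alerts by environment and service so that one
# page covers every SLI alert for that service.
route:
  receiver: pagerduty
  group_by: ['env', 'type']   # 'type' is the service label
  group_wait: 30s             # wait briefly to batch SLIs firing together
  group_interval: 5m          # how often to notify about new alerts in a group
  repeat_interval: 4h
```

With this in place, all alerts sharing the same env and type land in a single notification, regardless of how many SLIs are violating their SLOs.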
One of the core jobs of product managers is to speak with users to better understand their needs, their pain points, and the context in which they operate and use our products. But not all user calls are the same. There are three prominent types of user calls:

- Discovery or problem validation calls
- Roadmap discussions
- Solution validation calls

Here's an in-depth look at how we approach the three types of user calls at GitLab.

Discovery calls

Discovery or problem validation calls are product managers' most crucial conversations with users. Discovery calls are typically set up to learn about our users in a targeted way, and they help build a better understanding of users' pain points. For discovery, we need a recipe for repeatable, comparable user calls. For this reason, we should create an interview script and follow that script on all user calls. This does not mean these calls are robotic and devoid of improvisation, not at all! The script should provide the backbone of the discussion. We can adjust it either during the call or in advance, based on prior knowledge about the user. Good discovery calls typically take the form of a deep-dive conversation: we know the script by heart and can move back and forth around it, always asking the questions that fit the conversation.

Finding the right users is one of the most challenging parts of discovery calls. Thankfully, at GitLab, this is relatively easy. We can always reach out to the most active users on issues and invite them to a call. Another technique I employ is to find users in the Cloud Native Computing Foundation and Kubernetes communities' Slack channels and through articles on Medium. This way, I can also find non-GitLab users, a set of people likely even more valuable to interview than existing users. Finally, we can recruit users with the support of the account managers, who are always helpful in connecting PMs with users. Asking users about their needs shows them that we genuinely care about them.
There are at least two distinct types of discovery calls: PM-led and UX-led. UX research typically works on projects with a strict scope. For PM-driven calls, a great framework is the "continuous discovery" approach by Teresa Torres. With continuous discovery, we build a deep understanding of our users and surface well-understood opportunities. The technique allows us to get a broad view and to dive deep into specific aspects of our problem space when needed.

Roadmap discussions

Roadmap discussion calls are typically initiated by sales or account management teams. Product managers are asked to join the prospect/customer call to strengthen our position and show how much we care for the customer. To prepare for roadmap discussions, PMs should have an effective way to present the roadmap, which typically takes the form of slides. A diligent PM might even prepare something specifically for the client. During these calls, the user/customer/prospect will typically ask the questions, and the PM responds. Our role in these calls is to represent the truth: we might be tempted to paint a rosier picture of the current or expected state of the product than is accurate, but we should resist that temptation, and we should avoid making time-bound promises.

What are the expected outcomes of roadmap discussions? They can help strengthen our position with the user. Remember that these calls […]
No one ever said hiring was easy. As a matter of fact, talent hiring and retention are some of the hardest things for any software company to get right. According to a recent article at Developer Pitstop, the average engineer stays at a job for only about two years before moving on, and this tenure is shrinking over time. When we look at the typical timeline for engineers in a new role, we usually see something like:

- Learning and adaptation (3-6 months): Coming to grips with the new company, team, and their processes.
- Creating value for the organization (6-12 months): Adding value to the business by becoming a functioning member of the team.
- Becoming a role expert (6-18 months): Owning the role completely and helping to shape the direction of the team.

At GitLab, we pride ourselves on an outstanding onboarding process that reduces the time an engineer spends in the learning and adaptation bracket and accelerates their progression into the creating value bracket. We do this for two main reasons:

- Quicker integration: We aim to have engineers ship production code in less than one week, and to fully onboard them in less than three months.
- Reduced turnover: Engineers who have an awesome onboarding experience tend to stay with the same company longer.

The bottom line is that, with these benefits, investing in an amazing onboarding process gives you the highest ROI on your hiring initiatives. So, now that we know why we need to onboard quickly and correctly, let's talk about how we do it at GitLab.

Overview

💯 Before day one
💥 It's all about the onboarding issue
🥂 Pick the right onboarding buddy
👌 Pair, pair, and more pairing
🖐 All the coffee chats
🤘 Tailor the experience to the role
🚢 Ship some code in a week or less
💬 Let's get (and give) some feedback

💯 Before day one

The best onboarding processes start as soon as the candidate has officially accepted the offer.
This is done in a few ways:

- An onboarding issue is created with tasks for the hiring manager, the onboarding buddy, and People Experience (HR).
- The hiring manager selects the right onboarding buddy for the engineer and communicates expectations (more on this later).
- The engineer's accounts (email, GitLab account, Okta, etc.) are created and their hardware is shipped.
- GitLab reaches out via email to let the candidate know what the onboarding process looks like.
- The hiring manager reaches out to the engineer via email to set up a coffee chat on day one, as the initial process might seem overwhelming.

For us, the most important aspect is communication with the engineer to ensure they are set up for success. We provide them with access to their onboarding issue, helpful video guides for getting started, and a primer on how to navigate our handbook like a pro. This is so important because we know that if we stop communicating with the engineer after signing, we risk creating uncertainty, introducing inefficiency, or even losing them to another offer during that time.

💥 It's all about the onboarding issue

At GitLab, our onboarding issue is the most effective tool we have for […]
Did you know that it's simple to use some truly excellent Python libraries to boost your Delphi app development on Windows? These libraries are easy to use and provide wonderful ways to produce graphs and network visualizations. Adding Python to your Delphi toolbox can enhance your app development by bringing in new capabilities, allowing you to provide innovative and powerful solutions to your app's users by combining the best of Python with the low-code productivity and unparalleled power of native Windows development that Delphi provides. Are you looking to build a GUI for a graph and network visualization library? With NetworkX on Delphi, you can create, manipulate, and study the structure, dynamics, and functions of complex networks. This article will demonstrate how to create a Delphi GUI app dedicated to the NetworkX library.

Watch this video by Jim McKeeth for a thorough explanation of why you can love both Delphi and Python at the same time:

Getting to know the Embarcadero Python ecosystem

Before we dive deeper into the NetworkX4D GUI prototype, let's first get to know the Embarcadero Python ecosystem:

And here are the Embarcadero Python ecosystem licenses:

You can learn more about them in the following webinars by Jim McKeeth:

What do we mean by graphs and network visualization?

In mathematics, and more specifically in graph theory, a graph is a structure consisting of a set of objects, some of which are "related." The objects correspond to mathematical abstractions known as vertices (also called nodes or points), and each pair of related vertices is known as an edge (also called a link or line). A graph is typically represented diagrammatically as a set of dots or circles for the vertices, joined by lines or curves for the edges. Graphs are one of the topics covered by discrete mathematics and are the fundamental subject of graph theory. J. J.
Sylvester coined the term "graph" in this context in 1878, referring to a direct relationship between mathematics and chemical structure (what he called a chemico-graphical image). Networks are all around us, and they are extremely important in our lives: communication networks, social media networks, and biological networks are all examples. Here are a few examples of graphs and network visualizations:

- The world's social networks shown as a network graph
- A protein interaction network
- The coauthorship network of 2014 breaking news articles

What is the NetworkX graph and network visualization library?

NetworkX logo. Image source: networkx.org.

NetworkX is a Python package for creating, manipulating, and studying the structure, dynamics, and functions of complex networks. NetworkX provides:

- tools for studying the structure and dynamics of social, biological, and infrastructure networks;
- a standard programming interface and graph implementation suitable for a wide range of applications;
- a rapid development environment for collaborative, multidisciplinary projects;
- an interface to existing numerical algorithms and code written in C, C++, and FORTRAN; and
- the ability to work with large, nonstandard data sets without difficulty.

You can use NetworkX to load and save networks in standard and nonstandard data formats, generate many different types of random and classic networks, analyze network structure, build network models, design new network algorithms, draw networks, and much more.

What are the prerequisites for the NetworkX library?

You will need […]
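Before wiring NetworkX into a Delphi GUI, it helps to see what a few lines of plain NetworkX can do. A minimal sketch (assuming the networkx package is installed via pip; the graph itself is a made-up example):

```python
# Minimal NetworkX sketch: build a small social graph, then query it.
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("Alice", "Bob"), ("Alice", "Carol"),
    ("Bob", "Carol"), ("Carol", "Dave"),
])

print(G.number_of_nodes())                   # 4
print(G.number_of_edges())                   # 4
print(nx.shortest_path(G, "Alice", "Dave"))  # ['Alice', 'Carol', 'Dave']
print(nx.degree_centrality(G)["Carol"])      # Carol is best connected: 1.0
```

A Delphi GUI built with Python4Delphi would drive exactly this kind of script behind the scenes, with NetworkX (plus Matplotlib) producing the drawings shown in the app.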
A few months ago, Jim McKeeth and Yilmaz Yoru discussed the rapid evolution of technology and how machine learning and artificial intelligence are shaping the future. Concepts that were once purely theoretical are now being realized through various innovative technologies with the help of modern IDE software, programming languages, and libraries. During the webinar, Jim McKeeth demonstrated a brain-data-measuring device known as the EMOTIV EPOC EEG headset, a gadget designed for scalable and contextual human brain research that provides access to professional-grade brain data.

How does this innovative gadget work? As shown in the video, it works by placing electroencephalography (EEG) sensors on the scalp, which pick up and record the electrical activity in your brain. It gives a 3D real-time visualization of which parts of the brain are active. While the recordings may not be as accurate as Elon Musk's Neuralink, Emotiv still effectively shows which parts of the brain are active, as well as the wavelengths, which can be manually configured using the Emotiv software.

The reason McKeeth included this demonstration in the machine learning and artificial intelligence webinar is that the concept behind it is very much the same as that of a neural network. An artificial neural network is an attempt to simulate the network of neurons that make up a human brain, so that a computer can learn things and make decisions in a human-like manner. This is the same technology used in facial recognition software. While our brain is still more complex than all the computer hardware we have today, in theory we could achieve the same level of complexity with better computing power and better hardware. To see how this amazing device works, feel free to watch the demo video below.
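To make the neural-network analogy concrete, here is a toy artificial neuron in Python. It is illustrative only and unrelated to Emotiv's actual software: inputs are weighted, summed, and passed through an activation function, loosely mimicking a biological neuron deciding whether to fire.

```python
# A toy artificial neuron (illustrative only): weighted sum of inputs
# plus a bias, squashed by a sigmoid activation into the range (0, 1).
import math

def neuron(inputs, weights, bias):
    """One artificial neuron with a sigmoid activation."""
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-total))   # sigmoid activation

# Strongly positive evidence drives the neuron toward 1 ("fires"),
# strongly negative evidence toward 0 (stays quiet).
print(neuron([1.0, 0.5], [2.0, 1.0], -0.5))    # ~0.88
print(neuron([1.0, 0.5], [-2.0, -1.0], -0.5))  # ~0.05
```

A real neural network stacks thousands of such units in layers and learns the weights from data, but each unit does essentially this computation.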