If you've worked with Alyvix for a few years, you may have noticed that every so often a test case stops working for no apparent reason. If the underlying problem isn't an actual system fault (congratulations, your monitoring is working as intended!), then the cause is almost always a change in the interface you're monitoring.
While some large "breaking" changes will obviously require you to create a new Alyvix test case, more often the cause is a minor change: Alyvix can't find a button that was moved by a software update, for instance, or a multi-user system has persistent window properties. In this best practices blog, I'll show you how to build more robust test cases, so that these minor interface changes won't interrupt your monitoring or force you to rebuild your test cases.
With larger software suites and web applications like Salesforce, it often makes sense to create two types of user-centric monitoring checks: shallow, panoramic test cases that verify a large number of modules are working at a basic level, and one or more deep, highly specific test cases that verify a particular module is working across a range of its functionality. This article shows you how to create a deep check, using Salesforce Cloud Edition as an example.
Monitoring is essential to keeping IT systems running smoothly. Alyvix Server's visual monitoring approach complements typical monitoring systems by directly measuring what users actually experience. You can explore these measurements graphically to find and certify severe latencies and service interruptions, resolve them, and even prevent them from occurring. That's the Alyvix value.
End-user experience monitoring continuously tests the performance of business-critical applications from the perspective of end users. It quickly alerts you to any degradation in performance, responsiveness, or availability that users may experience, helping you avoid serious problems that lead to poor customer satisfaction, lost revenue, and negative brand impact.
Real User Monitoring and Visual Monitoring are both user-centric strategies for ensuring that the quality metrics users care about are maintained. Their underlying methodologies, however, differ, giving each approach its own set of advantages and disadvantages. The proactive nature of visual monitoring can help you discover and remedy problems before users even notice them.