
ISP Tracker features, agent or service questions

Ask how a feature works, how agents work, or anything else that isn't specific to a particular agent platform
164 Topics 1.5k Posts
Notice: support.outagesio.com is now consolidated into support.isptracker.com

Automatically monitor your Internet service and provider with alerts to problems
Track Internet disconnections, provider outages with historical data, and automated speed testing.
For Windows, Linux, ARM64, ARMv7. Learn more by visiting www.isptracker.com
Notice: If you created an account on app.isptracker.com, simply use the same credentials to log into these support forums.
  • Software tutorial needed

    3
    0 Votes
    3 Posts
    4k Views
    Hello Alex, I reset the software in the taskbar and waited a day. There was no change in what appeared in the dashboard, so I uninstalled the software and reinstalled it. Setup failed: 0x80070643, Fatal error during installation. This happened after I entered the user name and password that I use to open up the dashboard site. There was no other email listing a user name and password in the Agent 31518 Notification email. Regards, John
  • Reactivating a hardware agent

    17
    0 Votes
    17 Posts
    13k Views
    ISPtracker_Support
    Unfortunately, no. We ended up factory resetting them so that we could reload our firmware. Maybe some sort of DHCP/fixed IP mix-up where the agent/s could no longer get a DHCP IP since they were expecting a static DHCP assignment. Really not sure. All I can tell you is that here also, they would not connect to our DHCP server at all. I really do not know what happened, but I'm very happy that we were able to reload them and get them working. Please let us know how things are going after you use them for a while.
  • Speed Testing

    2
    0 Votes
    2 Posts
    4k Views
    Great question. The main point of the article is that there is no way to get anything conclusive. It's just another test, another stat that needs to be taken into account with more information. Speed testing is definitely useful, but it is a moment-in-time test using shared bandwidth, where the test can cross networks we have no control over and have no idea how they are set up. More importantly, most of these tests seem to terminate on CDNs, which are optimized to cache data mainly for streaming and other media. That's why we used a big file, so that the transfer could settle and we could (should) get a fairly sustained speed going. Even when using iperf, which is a more real-world test, it is inconclusive because the test travels over networks we don't control.
    The article mainly asks questions like: why did the transfer drop down to kilobytes when it's a 50Mbps pipe that is barely being used? I don't think we are implying anything about the provider, but pointing out that we simply don't know, since we don't control their network. We can make some assumptions by testing different kinds of data to different locations, but in the end, it is inconclusive. Yes, a web site could be slower, but 30 seconds is way up there in terms of time to render pages. The point there was simply to see if there was something going on with the file transfer specifically or with the overall bandwidth. Meaning, maybe the sustained speed caught the attention of an application manager which automatically tried to keep the transfer at a certain speed. No idea.
    One of the more interesting things during the testing was that we checked the hops to the destination and could see that the provider actually had something on the edge of the data center we were testing to. Meaning: our location, the provider, Level 3, back to the provider, then into the DC. It seemed odd that we could never get anything close to 50Mbps.
"I'm getting about 2-300Mbps on my wired speedtest; it's still useful in that I will spot a relative degradation, but I'm pretty sure if we used iperf or qperf to test between our connections we'd almost always get 1000Mbps."
When you say test between your connections, do you mean with the same provider, or would those tests go across multiple network owners to get from source to destination? Testing between servers in a data center, or even to another data center managed by the same org, usually shows the correct throughput, but that's because in those cases the network owner is under obligation to make sure those networks can handle all of the traffic required. Cable companies don't work this way; it's a best-effort service that has 'acceptable' ranges to cover all their needs. I cannot speak with authority since I've never been part of a cable company, but I've spent years fighting with them to get what we pay for, and to get them to fix problems for us and our customers when we offered ISP and MSP services. We did a lot of this kind of testing, and while speeds can remain around whatever they tested at, most of the time it's up and down as expected.
"My guess is the speedtest uses a very small file, and so on my big connection it's measuring the time taken to establish the connection too, which is why it nets out at about a quarter of the actual bandwidth?"
In fact, OTM tests against fast.com, which has an interest in making sure that consumers are getting the bandwidth they are supposed to be getting. If you are seeing what you think is a setup-time delay, it's possible, since it has to spawn a process, hit at least three servers, calculate the results, etc. The browser test is instant if you go to fast.com. However, the hardware agents that we sell do something different, similar to what you said. They run a shorter, smaller test to get a quick average.
Unlike the usual speed testing that people are doing, we aren't really interested in fully saturating everyone's connections over and over again. The main purpose of our speed test is not finding out what your maximum speed is, but whether you have usable bandwidth for your requirements. Our speed testing tools are still in their infancy as we try to find the best way to give the most useful information without fully saturating connections and using up all kinds of data, since many users are on data plans. The speed testing will end up being to and from our network only at some point, once we find the best way to get useful results. We've also tested limiting the speed to fast.com so as not to use up users' bandwidth/data or the wonderful fast.com service. BTW, there are a lot of interesting blogs/articles about fast.com and why they built the test and made it download-only. It's quite interesting. I hope this helps clarify the bit of testing we did that day :).
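The "let the transfer settle" idea above can be sketched in a few lines. This is only an illustration, not the actual agent test code, and the sample numbers are invented: it discards a warm-up window (connection setup plus TCP slow start) before computing throughput, which is exactly why a small-file test on a fast link under-reports.

```python
def sustained_throughput(chunk_log, warmup_s=2.0):
    """Estimate throughput in Mbps from (elapsed_seconds, cumulative_bytes)
    samples, discarding the warm-up window (connection setup plus TCP
    slow start) so short tests don't under-report fast links."""
    steady = [(t, b) for t, b in chunk_log if t >= warmup_s]
    if len(steady) < 2:
        return None  # transfer too short for a settled reading
    (t0, b0), (t1, b1) = steady[0], steady[-1]
    return (b1 - b0) * 8 / (t1 - t0) / 1e6

# Invented samples for a 50 Mbps link with about 1 s of setup delay.
log = [(1.0, 0), (2.0, 6_250_000), (3.0, 12_500_000), (4.0, 18_750_000)]
naive = log[-1][1] * 8 / log[-1][0] / 1e6   # whole-transfer average
settled = sustained_throughput(log)         # rate after it settles
```

On these made-up numbers the naive whole-transfer average reads 37.5 Mbps while the settled rate is the full 50 Mbps, which matches the point about small tests measuring setup time too.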
  • Why so many outages?

    3
    0 Votes
    3 Posts
    6k Views
    ISPtracker_Support
    Hi Peachy, Internally, we call this classifying the networks. For typical setups, this determination is quite accurate, considering that both the local network and the providers often use private IPs. Because of the above, there is no way to be one hundred percent sure that the software got it right, and in not-so-typical setups, human intervention is required. The user must know their own network in order to understand the results. In fact, this is something mentioned in these forums quite a bit: in some cases, the agent cannot tell the local network from the provider's, and the user has to know this.
    I'll give you an example. Let's say a private network with multiple routers, gateways and no static addressing. The algorithm might think that the first gateway it sees is yours because the next one is, say, four hops away. It cannot reliably determine where your network ends and where the provider's starts. Or let's say you have a couple of wireless routers being used as hot spots (APs), and in turn, those use your main router as their gateway. Then your provider's IP is another private IP. The code may not be able to determine where your network ends and where the provider's starts.
    In some cases, providers control everything from their plant to the Ethernet port or access point they provide at the customer's home. All these devices are on the provider's network, and the customer gateway is inside the provider's network. The code may show that the gateway is where the local network ends. While software can do a lot, in non-typical setups it is assumed that the user knows their own environment and can take this into consideration.
    There is something on the table about allowing the user to 'adjust' their networks so the algorithm is aware of each network. For example, the user could specify which hops are in their own network, and the algorithm would simply know this when it tries to classify the networks.
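The ambiguity described above can be sketched with a simple heuristic. This is only an illustration of why classification can guess wrong, not the actual algorithm; the function name and the addresses in the example path are made up.

```python
import ipaddress

def classify_hops(hops):
    """Label each traceroute hop as 'local' or 'provider'.
    Heuristic: hops are local until the first public IP appears;
    everything from there on belongs to the provider (or beyond).
    This guesses wrong when the provider also uses private
    (RFC 1918) addresses on its side of the handoff."""
    labels = []
    seen_public = False
    for hop in hops:
        if not ipaddress.ip_address(hop).is_private:
            seen_public = True
        labels.append((hop, "provider" if seen_public else "local"))
    return labels

# Illustrative path: home router, a second AP/router, then a *provider*
# private hop that the heuristic wrongly calls "local", then public transit.
path = ["192.168.1.1", "192.168.0.1", "10.200.0.1", "8.8.8.8"]
labels = classify_hops(path)
```

Here the heuristic labels the first three hops "local" even though 10.200.0.1 belongs to the provider, which is exactly the case where the user would need to tell the algorithm which hops are theirs.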
The classification code will get better as we get feedback and read about how members are using the service. We have tests happening even as I type this; once we confirm something works and can consistently replicate a good result, we look at adding it into production. There are a few things being tested in terms of classification; it just takes time to prove or disprove them. Great question.