For teams managing remote conservation areas, wildfire risk is becoming harder to predict and plan for, even in regions that have not historically been considered fire prone. Rising temperatures and longer dry periods are changing how fire behaves across forests, reserves, and protected land. One recent study found that 83.9% of wildfire-vulnerable species are now exposed to increased fire risk, with fire seasons projected to more than double in some regions. Fire behaviour is also becoming less predictable and harder to contain.

You may already be seeing the signs: vegetation staying dry for longer, water sources becoming less reliable, and more ignition points across a wider area. For smaller teams covering large territories, this shifts wildfire from a seasonal concern to an ongoing operational risk.

Why remote sites are exposed

Remote conservation areas come with structural challenges that make wildfire response harder. Teams often work across large, varied terrain with limited visibility, minimal infrastructure, and inconsistent or absent cellular coverage. Ranger teams are small, but the areas they cover are not.

When a fire starts, response depends heavily on what you can detect and communicate locally. In Madagascar, one protected reserve lost around a third of its forest in a single year due to wildfire pressure linked to rising temperatures and prolonged dry conditions. Events like this highlight how quickly impact scales when detection or response is delayed.

Distance from emergency services also increases pressure on conservation teams, while budget constraints shape what can realistically be deployed. And when habitats support endangered species, even a single event can cause long-term ecological damage.


What monitoring looks like today

Wildfire monitoring is typically shaped by scale and budget. In practice, teams may rely on ranger patrols and visual observation, weather tracking such as temperature, wind, and humidity, external satellite data sources, camera systems, or WAN sensor deployments.

These tools are valuable. They provide signals, support situational awareness, and in some cases trigger real-time responses. However, while the capability to monitor exists, the limitations of existing approaches are largely practical. Detection may depend on chance observation and can come too late, and cellular communication can fail at exactly the moment an emergency disrupts the channels needed for response.

Even when sensors are deployed, reliably transmitting alerts can be difficult. Data may be captured but not delivered effectively, creating a gap between detection and awareness. Constraints like these become more visible as wildfire risk increases.

 

Why time is of the essence

Delays in detecting early-stage fires reduce the available window for intervention. In fast-moving conditions, even short delays can significantly increase the scale of an incident. At the same time, teams may be distributed across the landscape, and lone workers need reliable communication for both safety and coordination.

The 2024 Jasper National Park wildfire shows how quickly situations can escalate, even in well-managed environments. The event led to around 25,000 evacuations and the loss of hundreds of structures. In more remote settings, with fewer resources, the margin for delay is even smaller.

 

Detection works. Delivery is the problem

Deploying sensors to monitor temperature, humidity, and smoke across high-risk areas can provide meaningful early signals from systems that run for long periods on minimal power.

The FireFly project in Northern Thailand is a strong example. Distributed sensor nodes monitored forest conditions and identified early fire risk, with UAVs used to confirm ignition points. The system was designed specifically for remote, low-infrastructure environments with cost in mind. But field observations highlighted a recurring issue: antenna placement and enclosure design affected system reliability, while dense vegetation and uneven terrain disrupted connectivity. Environmental conditions directly influenced whether data could leave the site. In other words, detection worked, but alert delivery didn't always follow.

The same pattern appears in field-based wildfire and peatland monitoring projects more broadly. Sensors can detect early-stage fire risk, land degradation, or changing environmental conditions, but the value of those systems depends on whether data can leave the site quickly and reliably. Studies of IoT wildfire detection systems highlight communication reliability, limited cellular coverage, packet loss, latency, and energy consumption as practical deployment challenges in remote environments. The communication channel, or "last mile" connection, is therefore critical to whether early detection becomes timely awareness.

 

Solving the last mile with Satellite IoT

In remote conservation areas, terrain, vegetation, and distance from infrastructure all affect signal performance. Systems that depend on terrestrial networks introduce gaps and points of failure. Satellite IoT removes that dependency.

By integrating a compact modem such as RockBLOCK 9603, a sensor monitoring system can send data over the Iridium satellite network with no reliance on local infrastructure, creating a direct path from your sensor to your team.

In practice, the workflow is simple:

  • Sensors monitor defined environmental thresholds
  • Local logic determines when conditions require attention
  • A short alert message is generated
  • That message is transmitted via satellite to your team.

Messages remain small, and transmission can be event-driven, supporting low-power operation and long deployment lifetimes. For conservation teams, this enables a focused deployment model: a limited number of sensors placed in high-risk areas, with a communication path that remains consistently available.
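The workflow can be sketched in a few lines. Everything here is illustrative: the threshold values, field names, and the commented-out transmit step are assumptions, and a real deployment would hand the message to the satellite modem (typically over a serial link) rather than build it in isolation.

```python
# Minimal sketch of event-driven wildfire alerting (illustrative only).
# Thresholds and field names are hypothetical examples, not product defaults.

THRESHOLDS = {
    "temp_c":      lambda v: v > 45.0,   # sustained high temperature
    "humidity_pc": lambda v: v < 15.0,   # very dry air
    "smoke_ppm":   lambda v: v > 50.0,   # smoke particulates detected
}

def check(readings: dict) -> list[str]:
    """Return the names of any readings that breach their threshold."""
    return [k for k, breached in THRESHOLDS.items()
            if k in readings and breached(readings[k])]

def build_alert(site: str, readings: dict, breached: list[str]) -> str:
    """Build a compact alert message.

    Satellite short-message payloads are small, so the message carries
    only what responders need to act on.
    """
    detail = ",".join(f"{k}={readings[k]}" for k in breached)
    return f"ALERT {site} {detail}"

readings = {"temp_c": 51.2, "humidity_pc": 9.0, "smoke_ppm": 80.0}
breached = check(readings)
if breached:
    msg = build_alert("ridge-03", readings, breached)
    # send_via_satellite(msg)  # hypothetical transmit step over the modem
```

Because nothing is transmitted until a threshold is breached, the system spends most of its life idle, which is what makes long, low-power deployments practical.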

When a fire is detected, the alert is delivered reliably, and in time to act.


When you need more field capability

For some deployments, a compact satellite modem may be enough to connect an existing sensor system. For others, the monitoring setup needs to handle multiple sensor inputs, apply logic locally, and decide when an alert should be sent.

That’s where a device such as RockBLOCK RTU can be useful. It aggregates sensor inputs from a range of sources and applies threshold logic in the field. When defined conditions are met, it generates alerts and transmits telemetry via its satellite connection.

This approach reduces dependency on continuous connectivity and avoids the need to send raw data elsewhere for processing. Decisions are made where the data is generated, according to the conditions being monitored. Reducing unnecessary data transmission also helps keep satellite costs to a minimum.

In practice, that gives you:

  • One unit integrating multiple sensors
  • Configurable thresholds based on your environment
  • Event-based alerting instead of continuous transmission
  • Context included with each alert.
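As a rough illustration of this pattern, the sketch below aggregates multiple sensor inputs, applies configurable thresholds, and attaches recent readings as context, producing output only when a limit is breached. The class, sensor names, and limits are hypothetical; this is not the RockBLOCK RTU's actual configuration interface.

```python
# Illustrative sketch of field-side threshold logic with context,
# in the spirit of an RTU-style unit. All names and limits are hypothetical.
from collections import deque

class FieldMonitor:
    def __init__(self, thresholds: dict, history: int = 5):
        self.thresholds = thresholds                  # {sensor: (low, high)}
        self.recent = {s: deque(maxlen=history) for s in thresholds}

    def ingest(self, sensor: str, value: float):
        """Record a reading; return an alert dict only if limits are breached."""
        self.recent[sensor].append(value)
        low, high = self.thresholds[sensor]
        if low <= value <= high:
            return None                                # in range: send nothing
        return {                                       # event-based alert...
            "sensor": sensor,
            "value": value,
            "recent": list(self.recent[sensor]),       # ...with trend context
        }

mon = FieldMonitor({"temp_c": (-10, 45), "humidity_pc": (15, 100)})
for v in (30, 38, 47):                                 # rising temperature
    alert = mon.ingest("temp_c", v)
```

The first two readings produce nothing; the third breaches the limit and yields an alert that carries the recent trend, so the person receiving it can judge whether conditions are escalating.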

The right approach depends on the scale of the site, the number of sensors required, and how much processing needs to happen in the field.


Designing for increased wildfire risk

Effective wildfire monitoring systems prioritize early detection and dependable alert delivery. They need to operate with limited power, minimal infrastructure, and changing environmental conditions, while remaining simple enough to deploy and maintain and reliable enough to trust when something happens.

Recent events reinforce the need for this approach. Reliability, simplicity, and clear information delivered to the right people at the right time can protect lives and support more effective response.

Facing a remote monitoring or alerting challenge?

If your team needs to detect environmental risk, transmit alerts from areas without reliable cellular coverage, or keep remote systems connected, we can help you explore the right satellite IoT approach.

Complete the form, or email hello@groundcontrol.com to discuss your use case with our team – we’ll reply within one working day.


In high-risk military environments, reliable communication is critical, but it can also create risk. Every transmission from a radio, satellite phone, or mobile device generates a signal. In stable environments, those signals enable coordination and operational effectiveness. In contested or electronically monitored settings, they can become liabilities, exposing teams to detection, interception, or exploitation.

For military communications teams, special operations planners, and defense capability leads, this creates a difficult operational challenge: how to deliver critical information to personnel in the field without increasing their electronic signature.

Zero-transmit devices offer a different approach. By allowing personnel to receive messages without sending anything back, they provide a discreet communication channel for scenarios where transmitting from the field may compromise operational security.

 

The Hidden Risk in Traditional Communications

Most conventional communication systems are built around two-way exchange. Radios, cellular devices, and satellite communications all rely on outbound signals to send information, establish connectivity, or acknowledge receipt.

For decades, this has been the foundation of operational coordination. But the same characteristics that make these systems useful can also create vulnerabilities. Whether it’s a handheld radio checking into a network, a satellite phone establishing a connection, or a mobile device searching for coverage, each transmission emits radio frequency energy that may be detected and analyzed.

For covert military units and special operations teams, this can compromise mission integrity and personnel safety. A single transmission may be enough to indicate that a unit is present in a contested area. Even encrypted communications, while protecting message content, can still expose metadata such as signal origin, timing, frequency, and transmission behavior, all of which may provide useful intelligence to an adversary.

 

Operating Where Transmissions Create Tactical Risk

Military and defense organizations increasingly plan for operations in denied, degraded, and disrupted environments. In these settings, the issue isn’t just whether a message can get through, but whether sending or acknowledging a message creates additional risk.

For covert teams, forward-deployed personnel, special operations units, and others working in surveillance-heavy environments, this creates a difficult trade-off. Teams need to receive updates, alerts, or instructions, but transmitting from the field may expose their position, pattern of movement, or operational presence.

Zero-transmit communication changes that model. By removing the need for the endpoint device to transmit, acknowledge, or handshake with a network, critical information can be delivered without creating an RF footprint from the user’s location.

This doesn't replace two-way communications in every scenario; rather, it provides an additional channel for situations where receiving information safely is more important than maintaining a continuous two-way link.

Receive-Only Satellite Messaging with No Endpoint RF Footprint

RockSTAR Burst is a receive-only satellite messaging device designed for military and defense teams that need to receive critical information without transmitting from the field.

Messages are sent via the Iridium Burst® service and delivered directly to authorized devices, where they can be received and decrypted without any outbound communication from the device itself. There’s no handshake, no acknowledgement, and no return signal from the endpoint.

This creates a secure one-way channel for delivering mission updates, alerts, or instructions to personnel operating in covert, contested, or surveillance-heavy environments.
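As a generic illustration of the receive-only model (not the actual Iridium Burst wire format or RockSTAR Burst's encryption scheme), a field endpoint can authenticate an inbound broadcast against a pre-shared key and act on it without ever emitting a reply:

```python
# Generic illustration of receive-only message handling: the endpoint
# verifies and reads inbound frames but never transmits anything back.
# The framing (pre-shared key, 32-byte tag + payload) is a hypothetical
# example, not the Iridium Burst wire format.
import hmac, hashlib

PRESHARED_KEY = b"example-key-loaded-at-provisioning"  # hypothetical

def receive(frame: bytes):
    """Verify an inbound frame (32-byte HMAC-SHA256 tag + payload).

    Returns the payload text, or None if authentication fails.
    Crucially, neither path sends an acknowledgement.
    """
    tag, payload = frame[:32], frame[32:]
    expected = hmac.new(PRESHARED_KEY, payload, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        return None          # drop silently: no NACK, no RF emission
    return payload.decode()

# The command side builds the frame; the field device only ever receives.
payload = b"tasking update: hold position"
frame = hmac.new(PRESHARED_KEY, payload, hashlib.sha256).digest() + payload
```

The design point is that both the success and failure paths are silent: a tampered or misaddressed frame is simply discarded, with no return signal that could reveal the endpoint.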


Because RockSTAR Burst doesn’t transmit, it creates no RF footprint at the user’s location. This significantly reduces the risk of detection, geolocation, or targeting based on endpoint transmissions, while still allowing command teams to reach personnel in the field.

Messages can be sent to individual users, designated operational groups, or entire fleets and convoys, enabling rapid dissemination of time-sensitive information across dispersed teams.

Leveraging Iridium's Low Earth Orbit satellite network, RockSTAR Burst provides global reach and near-real-time message delivery, typically in under 20 seconds.

Although designed primarily for outdoor use, Iridium Burst® transmissions can penetrate some buildings, partial obstructions, and adverse weather conditions, helping maintain message delivery in challenging field environments.

From Technology to Tactical Advantage

The value of RockSTAR Burst becomes clearest when mapped to military operational needs. Command teams can push intelligence, alerts, or mission updates to personnel in the field without requiring those personnel to check in, acknowledge receipt, or expose their position through outbound RF activity.

This is particularly valuable when teams need to maintain a low electronic signature but still remain informed. A change in tasking, threat warning, movement instruction, extraction update, or short mission-critical alert can be delivered without asking the endpoint device to transmit.

RockSTAR Burst isn't intended to replace two-way tactical communications. Voice, data, and command-and-control systems remain essential in many operational scenarios. Instead, it adds a discreet one-way channel that can sit alongside existing communications, giving commanders another option when transmitting from the field is undesirable or unsafe.

Used in this way, RockSTAR Burst supports a layered communications strategy: two-way systems where interaction is necessary, and receive-only satellite messaging where the priority is to deliver information without increasing the user's RF signature.

 

The Case for Zero-Transmit Devices in Modern Military Operations

As military operating environments become more complex, contested, and electronically monitored, the assumptions behind traditional field communications are being challenged.

More connectivity is not always better. In some scenarios, transmitting from the field can increase risk by creating an RF signature that may reveal a team’s presence, activity, or location.

RockSTAR Burst reflects a different approach. By combining global satellite reach, encrypted message delivery, targeted broadcast capability, and zero-transmit operation at the endpoint, it gives military and defense organizations a discreet way to keep personnel informed without increasing their RF footprint.

It doesn't replace two-way tactical communications, but it does add an important option for situations where the safest communication is one that does not require the field user to respond.

Achieve Zero RF Footprint for Operational Advantage

Our satellite-enabled RockSTAR Burst solution offers robust communication and connectivity for defense applications and more. If you need a zero-transmission, secure and controlled communication solution in hostile or degraded environments, we can help.

Partner with us to explore all our satellite solutions that safeguard your military operations anywhere in the world. Just complete the form, or email hello@groundcontrol.com and we’ll reply within one working day.


Reports of seismic activity and tsunami warnings in Japan in April 2026 have again highlighted how critical early warning systems are. Events like these reinforce a consistent reality: detection is only part of the system. The ability to communicate alerts quickly and reliably remains central to reducing impact.

As landslide, earthquake and tsunami monitoring systems evolve, this communications challenge is becoming more complex. Monitoring is moving beyond single parameter approaches toward multi-sensor systems that integrate different data types to improve situational awareness and reduce false positives. At the same time, research institutions are applying machine learning and deep learning techniques to identify patterns that may be difficult to detect through rule-based models alone.

These developments increase system capability, but they also change system requirements. More sensors generate more data. AI-driven approaches require datasets that are larger, more continuous and better contextualized. As a result, monitoring system design now has to account not only for detection, but also for power, data volume, transmission frequency, and the role of processing at the edge.

Detection is only useful if the alert gets through


Detection capability has improved significantly, and monitoring systems can often identify early signs of instability. But detection only matters if alerts reach the right people in time. That remains difficult in remote terrain. Monitoring sites are often located where infrastructure is limited, ground conditions are unstable, and access is restricted. Power depends on what the natural landscape allows, while cellular networks may be unavailable, unreliable, or vulnerable during an event.

In these conditions, a system can continue collecting data even as its communications path fails. This creates a gap between detection and action, reducing the value of the system no matter how capable the sensing layer is. International frameworks on early warning systems highlight that coverage is improving globally, but reliability and last-mile delivery remain key challenges.

Satellite connectivity can help close that gap. Because it does not rely on local infrastructure, it provides an independent communications path for remote or vulnerable locations. Ground Control’s earlier work in tsunami early warning systems in Thailand demonstrates how satellite connectivity can support resilience and last-mile data delivery, helping ensure that critical alerts can reach emergency response systems when local infrastructure is limited or unavailable.

Natural hazard monitoring is becoming more data intensive

As scientific research continues to evolve, natural hazard monitoring systems can generate a broad range of data. Traditional threshold-based systems typically produce discrete, event-driven messages. Multi-sensor deployments and research programs, by contrast, may generate continuous and contextual datasets that support analysis, model development, and validation. Within a single monitoring system, data may include:

  • Time critical alerts
  • Ongoing telemetry
  • Device and system health data
  • Larger datasets used for analysis and research
  • Photographic, mapping, audio, or video data.

Modern systems may combine LoRaWAN sensor networks, remote sensing methods such as radar, terrestrial and non-terrestrial communications, and both edge and cloud processing. Satellite devices are introduced into these systems to address coverage gaps, provide an independent communication path, or support resilience where terrestrial networks are limited.


The key point is that not all data behaves in the same way. A short emergency alert has very different requirements from periodic telemetry or a large dataset used for research. In practice, satellite IoT devices can support different roles depending on the size, urgency and value of the data being transmitted.

Three types of data, three connectivity roles

In earthquake, landslide and tsunami monitoring, the connectivity question encompasses what kind of data needs to move, how urgently it needs to move, and how much processing should happen before it leaves the site. Broadly, data requirements fall into three categories:

Data type            | Typical behavior                   | Main requirement                           | Satellite IoT role
---------------------|------------------------------------|--------------------------------------------|-------------------------------------
Alerts               | Small, urgent, event-driven        | Must get through                           | Resilient short-message transmission
Telemetry            | Regular, structured, operational   | Efficient visibility over time             | Periodic monitoring and backhaul
Research and AI data | Larger, richer, less time-critical | Filtering, storage, selective transmission | Edge processing and higher capacity
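One way to read these three roles is as a routing rule applied at the site. The sketch below is a simplified illustration: the queue names are invented for the example, and the 100 kB telemetry ceiling is an assumed limit chosen to echo the larger-payload transports discussed later, not a specification.

```python
# Sketch of routing site data by urgency and size, mirroring the three
# data roles. Queue names and the size ceiling are illustrative assumptions.
def route(message: dict) -> str:
    """Decide how a message should leave the site.

    message: {"kind": "alert" | "telemetry" | "research", "bytes": int}
    """
    if message["kind"] == "alert":
        return "satellite-immediate"      # small, must get through now
    if message["kind"] == "telemetry" and message["bytes"] <= 100_000:
        return "satellite-scheduled"      # periodic, structured backhaul
    return "buffer-then-bulk-upload"      # store locally, send selectively
```

The point is not the code itself but the decision it encodes: urgency and size, not sensor type, determine which communication path a given piece of data should take.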

1. Time-critical alerts: small messages, high consequence

For alert generation, the desired output may be a critical message triggered by defined thresholds or rules. When a predefined threshold is breached, an alert is generated at the device level and transmitted as a short message. This reporting-by-exception approach reduces dependence on continuous connectivity and helps keep satellite airtime costs to a minimum.

This isn’t to suggest that hazard monitoring is simple. Rather, some parts of the system still depend on very small, high priority messages: a threshold has been crossed, a device has changed state, or an alarm needs to be raised.

Devices such as RockBLOCK RTU are designed for this type of integration and event-driven monitoring. Supporting multiple sensor inputs and enabling local data batching at the edge, the RTU allows data output to remain minimal in size but critical in importance.

This reflects the same principle seen in the tsunami early warning system mentioned earlier, where the priority is ensuring that critical signals can be generated and transmitted under constrained conditions. The RTU also offers sensing, data logging and action on basic threshold triggers. It’s not designed for high level data processing, but it can play an important role in raising an alarm, warning a community, and ensuring that the message gets through.


2. Telemetry: maintaining visibility between events


Alerting is only one layer of a monitoring system. Beyond emergency messages, earthquake and landslide monitoring systems also require ongoing visibility into environmental conditions and system status. Telemetry requirements may include periodic sensor readings, device diagnostics, system health information, and environmental trends over time. This data supports the interpretation of conditions leading up to and following an event. It can also be used to validate system performance and support operational decision making.

Here, architectural decisions often depend on project cost, power budget and the frequency of transmission. Compared with short alert messages, telemetry may require greater data capacity and more regular communication. It remains structured and predictable, but introduces additional considerations around bandwidth and power usage.

Devices operating over services such as Iridium Messaging Transport (IMT) are well suited to this type of data flow. RockBLOCK Pro delivers faster throughput and enables larger payloads than SBD, supporting aggregated sensor data, images, and audio clips up to 100kB. This provides more flexible data transmission patterns than low-bandwidth messaging.


As an IP66-rated terminal, RockBLOCK Pro has a rugged design with a built-in Iridium Certus antenna. Its combination of GNSS and serial interfaces (such as RS232/RS485) allows it to integrate with external systems or data sources, acting as a communications layer for structured telemetry and providing a means to transmit aggregated or processed seismology and landslide data. This may include:

  • Ground movement sensors such as geophones and accelerometers
  • Tilt and deformation sensors for slope and structural monitoring
  • Pressure and moisture sensors for groundwater and subsurface conditions
  • Threshold-based triggers such as seismic switches for alert activation
  • Environmental sensors including rainfall and wind
  • Serial connected instruments using RS485 or RS232
  • USB field access for configuration, data retrieval and maintenance.
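To make the batching concrete, here is a minimal sketch that packs periodic readings into JSON payloads under a 100 kB ceiling, matching the payload size mentioned above. The encoding, field names, and greedy packing strategy are illustrative assumptions, not a prescribed integration pattern.

```python
# Sketch: batch periodic sensor readings into payloads that stay under
# a transport ceiling. Encoding and field names are illustrative.
import json

MAX_PAYLOAD = 100_000  # bytes; assumed ceiling for one transmission

def batch(readings: list) -> list:
    """Greedily pack readings into JSON payloads no larger than MAX_PAYLOAD."""
    payloads, current = [], []
    for r in readings:
        candidate = current + [r]
        if len(json.dumps(candidate).encode()) > MAX_PAYLOAD and current:
            payloads.append(json.dumps(current).encode())  # flush full batch
            current = [r]
        else:
            current = candidate
    if current:
        payloads.append(json.dumps(current).encode())
    return payloads

# e.g. tilt readings sampled over time (hypothetical values)
readings = [{"t": i, "tilt_deg": 0.1 * i} for i in range(1000)]
payloads = batch(readings)
```

Batching like this trades latency for efficiency, which is exactly the right trade for telemetry: readings are not individually urgent, but visibility over time matters.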

The introduction of RockBLOCK Pro for backhaul or resilience provides additional monitoring capability and a significant increase in capacity to support a wider remote natural hazard monitoring system.

3. Research and AI workloads: when raw data is too large to send continuously

As monitoring systems expand to support research and model development, data requirements extend beyond alerts and telemetry. These datasets may include high-resolution sensor data over extended periods, multi-sensor correlations across locations, and inputs used for training and validating analytical models. This type of data is higher in volume and less time-sensitive, but still requires a reliable path from remote environments.

Systems often store or buffer data locally and transmit it based on available bandwidth, power and connectivity. This may involve scheduled transfers, event-based uploads, or selective transmission of processed data. Devices such as RockREMOTE Rugged support this role by combining higher throughput connectivity with embedded compute capability. They act as an interface between field deployments and cloud-based systems, enabling data handling, filtering and integration with external platforms.

At this point, the device's role isn't limited to communication; it becomes part of the data management architecture. High-frequency sensing, particularly in seismic monitoring, can generate more data than can be transmitted continuously over constrained links. Local processing allows this data to be reduced before transmission. Tasks such as filtering, segmentation and feature extraction can be applied at the point of collection, allowing derived parameters to be transmitted in place of raw data.

This preserves the characteristics needed for analysis while maintaining manageable data volumes. Depending on the application deployed, the same edge compute can also run lightweight analytical models.

These models, typically trained on historical datasets, can be applied to live data streams to identify signals of interest. This may include distinguishing between background activity and patterns associated with instability or early seismic events. In these scenarios, transmission is based on relevance rather than volume. Data is prioritized according to its analytical value, rather than transmitted continuously.
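A classic example of this kind of edge feature extraction in seismology is the short-term-average / long-term-average (STA/LTA) trigger, which flags samples where recent signal energy rises sharply above the background. The sketch below is a minimal illustration: window lengths and the trigger level are arbitrary choices, and production systems use tuned, streaming implementations.

```python
# Minimal STA/LTA trigger sketch: compare short- and long-window average
# signal amplitude and flag samples where the ratio exceeds a threshold.
# Window lengths and the trigger level are illustrative choices.

def sta_lta(samples, sta_n, lta_n):
    """Return the STA/LTA ratio per sample (0.0 until enough history)."""
    ratios = []
    for i in range(len(samples)):
        if i + 1 < lta_n:
            ratios.append(0.0)
            continue
        sta = sum(abs(s) for s in samples[i + 1 - sta_n : i + 1]) / sta_n
        lta = sum(abs(s) for s in samples[i + 1 - lta_n : i + 1]) / lta_n
        ratios.append(sta / lta if lta > 0 else 0.0)
    return ratios

def triggers(samples, sta_n=3, lta_n=10, level=3.0):
    """Indices where the ratio crosses the trigger level: these derived
    events, not the raw waveform, are what an edge device might transmit."""
    return [i for i, r in enumerate(sta_lta(samples, sta_n, lta_n)) if r >= level]

quiet = [0.1] * 20
event = quiet + [5.0, 6.0, 5.5] + [0.1] * 10   # a short burst after background
```

Transmitting only the trigger indices (plus, perhaps, a small window of samples around them) reduces the satellite payload from a continuous waveform to a handful of bytes per event.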


RockREMOTE Rugged’s Linux-based environment supports custom applications, enabling user-defined data processing and integration and allowing custom processing pipelines or models to be deployed at the edge. Local storage enables data retention where continuous transmission is not practical, while connectivity over Iridium Certus 100 and cellular networks provides a path for data to move to cloud environments when required.

This supports a range of system behaviors, including:

  • High-frequency data capture with selective transmission
  • Local feature extraction to reduce bandwidth requirements
  • Model inference at the edge to support early interpretation
  • Buffered storage for later retrieval or batch upload
  • Video compression before transmission
  • Running real-time tasks such as filtering, segmentation, or frequency-domain analysis
  • Saving high-resolution data locally
  • Transmitting exception summaries via satellite.

 

In earthquake and landslide monitoring, the value of this compute power is in its ability to manage complex data flows locally, reduce unnecessary transmission, and support more autonomous system behaviour through locally defined logic or processing in remote environments.

Satellite IoT as part of the monitoring infrastructure


The evolution of landslide and earthquake monitoring systems is shaped by two parallel developments. The range of observable data is increasing through multi-sensor integration, remote sensing and advanced analysis. At the same time, environmental and operational constraints remain consistent. Monitoring sites are often remote, power limited and difficult to access. Communications infrastructure may be unavailable, unreliable or exposed to the same hazards the system is designed to monitor.

Within this context, connectivity supports the movement of different types of data, from time critical alerts to larger datasets used for analysis. Satellite enabled devices extend coverage and enable communication where other infrastructure is limited. Different device types support different roles within the system. Some are suited to edge-based alert generation. Others support structured telemetry and system visibility. Higher capacity devices with embedded compute power can help process, prioritize and transmit larger datasets for research and AI-assisted monitoring.

The most effective system design starts with the data: its urgency, size, frequency and operational value. From there, satellite IoT can be used as a resilient layer within a wider monitoring architecture.

Building the connectivity layer for modern monitoring systems

Whether you're building threshold-based alerts, expanding telemetry, or exploring edge processing for AI-driven monitoring, the challenge is the same: getting the right data through, at the right time, under real-world constraints.

We work with system integrators, scientists, and engineers to design connectivity architectures that balance power, cost, data volume, and resilience across satellite and hybrid networks.

Complete the form or email hello@groundcontrol.com, and we’ll be in touch within one working day.


From commercial shipping lanes and port approaches to offshore energy platforms and autonomous vessel trials, trusted Positioning, Navigation, and Timing (PNT) underpins safety, efficiency, and regulatory compliance. Every decision from route optimization to collision avoidance relies on accurate and continuous positioning data, and for decades, Global Navigation Satellite Systems (GNSS), including GPS, GLONASS, Galileo, and BeiDou, have served as the backbone of maritime PNT.

These systems provide global coverage, high accuracy under ideal conditions, and reliable tracking of maritime operations at scale. However, there is a well-documented rise in maritime GNSS/GPS disruption, including deliberate jamming and increasingly sophisticated spoofing attacks. The reality is that conventional GPS-based solutions were never designed for today's contested, congested, and adversarial signal environment, and traditional GNSS/GPS signals are increasingly exposed in ways that were not anticipated when they were first deployed.

Traditional mitigation and defence strategies attempt to address these risks but often fall short; as a result, existing GPS-based solutions are reactive rather than resilient. They can identify when something is wrong, but are not inherently designed to guarantee reliable, trusted positioning when GNSS is compromised. The result is operational risk that extends beyond safety and security to efficiency, insurance exposure, regulatory compliance, and, ultimately, trust in maritime systems. This is where Assured PNT enters the conversation, and where Iridium PNT positions itself as a fundamentally different approach.

This blog explores how existing maritime GPS solutions are no longer equipped for today’s evolving threat landscape, and how Iridium PNT enables reliable, trusted, and continuous maritime operations in compromised GNSS/GPS environments.

The Cracks in Conventional Maritime GPS

1. Anti-Jamming GNSS Systems

Anti-jamming GNSS systems were developed to suppress interference using directional antennas and filtering techniques. These methods work by attempting to block out noise and prioritise signals that appear clean. However, the threat landscape has evolved beyond simple interference. Modern threats don’t just jam, they deceive. Combined jamming and spoofing attacks create ambiguous signal environments where systems must decide which signals are real and which should be ignored. The result is confusion, with decision making becoming uncertain precisely when certainty is required.

2. Anti-Spoofing Technologies

Anti-spoofing solutions attempt to validate whether a signal is genuine or manipulated, often relying on the same rules-based detection and signal-validation logic that underpins anti-jamming technologies. But those assumptions are increasingly outdated. As spoofing techniques have gained precision and adaptability, these systems have struggled to keep pace. Signal mimicry is more precise, timing offsets are subtler, and attack patterns are adaptive, more closely resembling legitimate behaviour. This leaves anti-spoofing approaches in a reactive rather than predictive position, constantly responding to threats that are evolving faster than the defences designed to stop them.

3. Multi-GNSS Receivers

Using multiple constellations (GPS, Galileo, GLONASS, BeiDou) is often framed as a way to improve resilience, but in practice, it introduces more inputs without addressing the core weakness. GPS, Galileo, and other systems share similar signal structures and operate in comparable frequency ranges, which makes them vulnerable to the same types of interference and spoofing. When disruption occurs, it tends to affect all constellations in similar ways, meaning having more signals does not equate to having more trustworthy information. If one is compromised, the likelihood is high that others are affected as well. This creates a false sense of redundancy, i.e., more inputs, but not more independence.
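The redundancy fallacy can be made concrete with a toy probability model. Everything below is an illustrative assumption (the 10% disruption figure, the linear correlation blend), not a measured failure rate; it only shows why correlated inputs behave so differently from independent ones.

```python
def p_all_fail(p_single, n, correlation=0.0):
    """Toy common-mode failure model.

    With correlation 0, the n constellations fail independently and the
    chance of losing all of them is p_single ** n. With correlation 1,
    shared signal structures mean they all fail together, so the chance
    is simply p_single. Real GNSS interference sits near the latter,
    because the bands and signal designs are shared.
    """
    independent = p_single ** n
    return correlation * p_single + (1 - correlation) * independent

# Four constellations, 10% chance each is disrupted during an incident:
p_all_fail(0.1, 4, correlation=0.0)  # truly independent: ~0.0001
p_all_fail(0.1, 4, correlation=0.9)  # shared-domain reality: ~0.09
```

The gap between the two results is the difference between genuine redundancy and duplication within a shared vulnerability.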

4. Differential GPS (DGPS)

DGPS enhances positional accuracy using ground-based correction signals, and in stable environments it performs well. But accuracy is not the same as trust. DGPS still depends on the integrity of the underlying GNSS signal, so in a jamming or spoofing scenario it remains just as vulnerable. In fact, it can amplify risk by making incorrect positioning appear more precise, giving maritime operators false confidence in data that may already be compromised.

5. Terrestrial Backup Systems

Terrestrial backup systems (e.g., eLoran, radio navigation systems) provide an alternative to satellite-based positioning by using shore-based infrastructure. While effective in coastal areas, their usefulness diminishes rapidly beyond those boundaries. Coverage is inherently limited, and the cost and complexity of deploying such systems at scale make them impractical for global maritime operations. For vessels operating in the open ocean, these solutions cannot provide the continuity required for safe and efficient navigation.

 

The Core Issue is Dependency on a Single Domain

Taken together, these approaches reveal a shared limitation: they attempt to improve or protect GNSS/GPS, but they don’t remove dependence on it. Whether through signal reinforcement, interference detection, or redundancy within the same domain, the underlying dependency remains intact.

It’s worth noting that modern bridge systems can present a layered navigational picture by combining GNSS with radar, AIS, INS, and ECDIS. That improves redundancy, but in most merchant vessels GNSS still provides the primary position input, so interference or false data can still degrade overall situational awareness unless it is independently cross-checked.

Ultimately, there are a limited number of truly independent alternatives to fall back on, as most existing mitigations operate as layers within the same ecosystem rather than as genuinely distinct sources of truth. As a result, what appears to be redundancy is, in many cases, simply duplication within a shared vulnerability.

This is the critical gap that Iridium PNT and RockFLEET Assured are designed to address. Rather than attempting to further fortify GNSS-dependent systems, Iridium PNT reduces reliance on any single domain altogether, enabling a more secure, resilient, and multi-domain approach to assured navigation.

 

Solving GNSS Dependency with A-PNT and Iridium PNT

Assured Positioning, Navigation and Timing (A-PNT) is the idea of maintaining trusted position, navigation, and timing when GNSS is degraded, denied, or untrusted. It goes beyond basic capability to include resilience, integrity, and trust, helping vessels maintain safe navigation, operational continuity, and compliance despite interference.

Iridium PNT is one way of delivering that resilience. It’s the only commercially available satellite-based PNT service that operates independently of GNSS, meaning it doesn’t rely on GPS, Galileo, GLONASS, or BeiDou. In a landscape where most backup solutions still depend on the same GNSS/GPS signals, that independence is critical.

Unlike conventional GNSS, which relies on Medium Earth Orbit (MEO) satellites transmitting very weak signals over vast distances, Iridium PNT operates from a Low Earth Orbit, or LEO, constellation, bringing satellites much closer to Earth. This results in stronger signals and contributes to greater resistance to jamming and spoofing in real world maritime environments.


RockFLEET Assured for Resilient Maritime Navigation

 

RockFLEET Assured represents a necessary shift away from single GNSS/GPS signal dependency. By leveraging the independent and highly secure Iridium PNT signal, vessels aren’t left without a trusted source of navigation data in the event of GPS jamming, spoofing, or denial. This enables uninterrupted operations across open ocean, congested shipping lanes, and high-risk regions where jamming and spoofing activity is increasingly prevalent.

For optimum navigational assurance, RockFLEET Assured continuously compares the trusted Assured PNT position with GNSS and raises alerts when position integrity is at risk. Rather than relying on GNSS alone, it uses an authenticated Iridium PNT position source to help identify anomalies and highlight when GNSS may be jammed, spoofed, or otherwise unreliable.

By cross-checking GNSS against trusted A-PNT data, RockFLEET Assured helps reduce the risk of false positioning and supports safer navigation and better operational awareness. Beyond resilience, it’s also engineered for practical deployment, supporting cable runs of up to 100 m for flexible installation and an optional backup battery that can continue tracking and reporting if vessel power is interrupted.
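The cross-check principle described above can be sketched in a few lines: compare the GNSS fix against an independently sourced fix and alert when they diverge. This is an illustrative calculation, not RockFLEET Assured’s actual algorithm; the 500 m alert threshold and the function names are assumptions for the sketch.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two fixes, in meters."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def check_position_integrity(gnss_fix, trusted_fix, threshold_m=500.0):
    """Compare a GNSS fix against an independent trusted fix.

    Returns (divergence_m, alert); alert is True when the two sources
    disagree by more than the configured threshold.
    """
    d = haversine_m(gnss_fix[0], gnss_fix[1], trusted_fix[0], trusted_fix[1])
    return d, d > threshold_m

# A GNSS fix roughly 11 km east of the independently derived position
# would trip the integrity alert:
d, alert = check_position_integrity((51.5, -0.1), (51.5, 0.06))
```

In a real system the threshold would reflect the accuracy of both sources and the vessel’s dynamics, but the core idea, an independent reference that makes GNSS anomalies detectable, is the same.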

[Annotated graphic of RockFLEET Assured]

From Accuracy to Assurance

In today’s maritime and security operations, the core challenge extends beyond positioning accuracy to trust in the data itself. Existing GNSS/GPS-based solutions, even when layered with mitigation technologies, remain dependent on a single domain that is increasingly exposed to disruption, deception, and interference. This creates a critical gap in trusted, continuous navigation at sea, particularly in contested or high-risk environments where GNSS/GPS reliability cannot be assumed.

Iridium PNT and RockFLEET Assured directly address this gap by introducing a truly independent and resilient source of positioning data that operates outside the limitations of traditional GNSS. By combining multi-domain inputs with real time integrity assessment and prioritizing assured, trustworthy signals, they move maritime navigation from reactive detection to proactive resilience. The future of navigation at sea will be defined by this ability to operate with confidence in uncertain and contested GNSS/GPS environments. A-PNT is central to that future, and RockFLEET Assured is built to deliver it.

Trusted A-PNT For Navigational Certainty at Sea

For over 20 years, we’ve delivered resilient satellite solutions for remote connectivity and secure communications. We’re proud to support commercial shipping, offshore operators, and maritime security providers with dependable satellite connectivity and assured positioning capabilities designed for the realities of the modern maritime domain.

If you want to offer A-PNT solutions as part of the security strategy for your maritime clients, complete the form, or email hello@groundcontrol.com and we’ll reply within one working day.


In 2024, a nationwide AT&T outage disrupted emergency communications and affected access to 911 services in the US. More than 25,000 attempts to reach 911 were blocked, and service was disrupted for more than 125 million devices. At the same time, multi-state 911 outages continue to occur. Increasingly, for emergency response, the issue is not just whether networks are available, but how they perform when conditions change.

For the teams responding to calls or disaster events, working with real-time video, data, and mobile command environments as part of day-to-day operations brings a different kind of pressure. Coverage alone is no longer enough; what matters is whether connectivity remains usable under pressure, across networks, locations, and conditions.

Operating across multiple networks such as FirstNet or commercial LTE, traditional satellite, and increasingly LEO services such as Starlink introduces new complexity and inefficiency. Those issues now need to be addressed if emergency teams are to stay reliably connected.

 

Why Performance Matters as Much as Coverage

You can have a strong signal and still struggle to get the performance you need. During major incidents, networks rarely fail completely; congestion builds, latency increases, packets drop, and throughput becomes inconsistent. Even with signal present, performance becomes unpredictable. The FCC has highlighted how disasters expose these types of resilience gaps.

At the same time, operational demands are increasing. With the transition to Next Generation 911 (NG911), video, images, and real time data are becoming part of standard workflows. If communication links drop, the result is video feeds that struggle to stay stable; slower or unreliable access to CAD and GIS systems; and inconsistent performance between vehicles, command posts, and field teams.

Multiple networks (LTE, 5G, LEO satellite, and legacy GEO systems) are often all in play, whether by design or through gradual adoption. This reflects a broader shift: managing multiple technologies and networks is now part of the operational reality. So the challenge isn’t whether there is enough connectivity; it’s how well those networks work together when it matters.


Why Failover-Based Connectivity Can Fall Short

Many setups still rely on failover. One network is primary, and others act as backup. It works when a network drops completely. It doesn’t work as well when performance degrades. In most incidents, what you actually see is:

  • Increasing latency
  • Packet loss
  • Reduced throughput
  • Unstable performance

But failover only responds once a threshold is crossed, and by then, performance has already dropped below what applications need. There’s a delay between degradation and recovery, and during that time, services like video, VoIP, and real time data are disrupted.

It also means you’re not making full use of the networks available to you. Failover reacts to failure; it doesn’t actively optimize performance, and active optimization is what modern operations need.
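The delay between degradation and recovery can be shown with a minimal failover model. The 50% loss trip threshold and the path figures are invented for the example; the point is only that threshold-based switching ignores everything short of outright failure.

```python
def failover_select(paths, loss_threshold=0.5):
    """Threshold-based failover: stay on the primary path until its
    packet loss crosses a hard threshold, then drop to the backup.
    Gradual degradation below the threshold is ignored entirely."""
    primary, backup = paths
    return backup if primary["loss"] >= loss_threshold else primary

primary = {"name": "LTE", "loss": 0.30}  # 30% packet loss: badly degraded
backup = {"name": "LEO", "loss": 0.01}

# Failover still pins traffic to the degraded primary, because the
# 50% trip threshold has not yet been crossed:
chosen = failover_select([primary, backup])
```

Video and VoIP would already be unusable at 30% loss, yet nothing in this logic reacts until the hard threshold is breached, which is exactly the gap described above.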

Adding Starlink or Multiple Networks Doesn’t Solve the Problem Alone

You may already have addressed coverage gaps by adding services like Starlink. That improves reach and bandwidth, especially in remote or hard-to-cover areas. But adding more networks introduces its own complexity: multiple providers and contracts, different data plans and cost models, and networks with very different performance characteristics.

Without coordination, those networks sit alongside each other, rather than working together. This often leads to manual switching between connections; static rules that don’t reflect real time conditions; uneven data usage across devices, and limited visibility into what’s actually happening across the whole response setup. So while you have more connectivity available, it’s not always being used in the most effective way.

What Multi-Network Connectivity Should Look Like in Practice

The shift here is not about adding more. It’s about changing how you use existing services by managing continuous, multi-network connectivity. In practice, that means multiple connections active at the same time, with traffic routed dynamically based on real-time conditions.

Instead of waiting for a connection to fail, the system adapts continuously, using the most appropriate network path at any given moment, based on latency, packet loss, and bandwidth. This is the principle behind Dejero Smart Blending technology, which routes traffic across multiple connections in real time rather than switching between them.
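Condition-aware routing of this kind reduces to scoring each path on its live metrics. The weighting below is an illustrative heuristic, not Dejero’s actual algorithm, and true blending distributes traffic across several paths at once; this sketch only picks the momentary best.

```python
def path_score(latency_ms, loss, bandwidth_mbps):
    """Composite quality score (higher is better). The weighting is an
    illustrative heuristic: reward usable bandwidth, penalize loss and
    latency continuously rather than waiting for a failure threshold."""
    return bandwidth_mbps * (1 - loss) / (1 + latency_ms / 100)

def best_path(paths):
    """Select the currently strongest path from live measurements."""
    return max(paths, key=lambda p: path_score(p["latency_ms"], p["loss"], p["mbps"]))

# Hypothetical live measurements across three active connections:
paths = [
    {"name": "LTE", "latency_ms": 60, "loss": 0.20, "mbps": 50},
    {"name": "LEO", "latency_ms": 40, "loss": 0.01, "mbps": 100},
    {"name": "GEO", "latency_ms": 600, "loss": 0.005, "mbps": 20},
]
```

Because the score is re-evaluated continuously, a degrading LTE link loses traffic gradually as its metrics worsen, instead of holding everything until a failover threshold trips.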

The outcome is more consistent performance, with fewer connectivity interruptions and less need for manual intervention, making reliability less about network uptime and more about service usability. It supports the wider move toward IP-based emergency communications, where video, data, and voice increasingly need to work across different networks and locations. And it supports what matters most: not the status of any one link, but whether the overall service remains usable and effective at all times, providing true resiliency.

How Ground Control Multipath Optimizes Connectivity

Ground Control Multipath is designed to help you bring all of your networks together into something that works as a whole. It builds on what you already have: your existing LTE and 5G connectivity, your current satellite services, including LEO and GEO, and additional capacity where it makes sense.

Those connections are then managed through a routing layer that continuously selects the most appropriate path based on real time conditions. From your perspective, that means less time managing networks individually, more consistent performance across devices and locations, shared data usage instead of isolated plans, and reduced reliance on manual failover. The goal is not to replace your existing setup; it’s to make it work more effectively as a system.

The Multipath and Dejero TITAN fit

Ground Control Multipath provides the overall solution. It’s designed around your operation and brings your available networks together so they work more effectively as one. Dejero TITAN is the device that makes this possible: it manages multiple live connections, switches seamlessly between networks or combines them where needed, and makes bandwidth usage efficient, reducing your costs. In other words, Multipath is the overall service approach; TITAN is the enabling technology that delivers it.

What This Means for Your Operations

When your networks work together properly, you see more consistent performance across video and data services, even when individual networks degrade. You make better use of the connectivity you’re already paying for, with data shared and optimized across the deployment. And you gain confidence in how your systems will behave during an incident, not because networks don’t fail, but because your setup adapts when they do.

Most agencies now operate across multiple networks, whether intentionally or not. The next step is making those networks work together. This is not about adding more connectivity; it is about improving how it’s used. Good coverage is no longer enough. What matters is consistent, usable performance when it matters most.


Review Your Current Connectivity Setup

If you want to make better use of the connectivity you already have, we can help. Our team offers a no-cost review of your current setup to identify where performance can be improved and costs reduced. It’s a practical conversation based on how you operate today.

Complete the form or email us at hello@groundcontrol.com and we’ll get back to you within one working day.


Maritime, aviation, and defense operations all depend on GPS for positioning, navigation, and timing. But, as our infographic highlights, that dependence comes with real risk: GPS is vulnerable to jamming and spoofing, and relying on it alone creates a single point of failure.

That risk is becoming harder to manage. Reported interference incidents continue to rise across key regions, including a 127% rise in Baltic incidents between Q1 and Q2 2025, more than 1,000 vessels affected in Sudan and the Red Sea in 2025, and 5,655 flights spoofed in the Nicosia FIR in July to August 2024.

Why Modern GPS Threats Demand More Than Anti-Jamming

For a long time, disruption was often discussed mainly as a jamming problem. However, attack methods evolve, and disruption can now shift rapidly between jamming and spoofing. These are not the same threat; jamming tries to deny the signal, whereas spoofing tries to deceive the receiver. A solution that focuses only on anti-jamming may still struggle when an attack alternates between blocking the signal and imitating it.

That’s why resilience has to be about more than signal protection alone. It has to be about maintaining a trusted source of positioning even when GNSS is disrupted, denied, or manipulated.

What A-PNT actually means

A-PNT is best understood as a resilience approach, not a single technology. As the infographic sets out, it adds redundancy, position cross-checking, trusted timing, and operational continuity when GNSS alone cannot be relied on. A-PNT isn’t “one new signal replacing GPS”; it’s a broader strategy for reducing dependence on a single vulnerable source.

[Infographic: GPS alone is no longer enough; A-PNT solutions]

Where Iridium PNT fits

Iridium PNT supports a broader A-PNT strategy by providing a completely separate source of PNT, with zero dependence on GPS. That independence is critical. If a backup still depends on GNSS somewhere in the chain, it may inherit the same vulnerability. Iridium PNT operates as a wholly separate system, giving operators an independent source they can use when GNSS is disrupted or cannot be trusted.

It also brings a major security advantage: the signal is encrypted and authenticated. In practical terms, that makes spoofing exceptionally difficult, because the receiver is not simply accepting any signal that appears plausible; it’s validating a trusted source.

Why that matters operationally

In the real world, navigation resilience is defined by whether a system continues to deliver trusted data under pressure. That means operators need more than a warning that GNSS has been compromised. They need:

  • An independent source of position and timing
  • Confidence that the source itself is trusted
  • Continuity if infrastructure is damaged
  • A solution that works with the systems they already use

RockFLEET Assured is designed for use in contested and high-risk operating environments. Its architecture supports flexible installation, with compute power in the above-deck unit and cable runs of up to 100 meters. That means the unit can be installed away from the bridge in a discreet location, making it harder to identify, harder to interfere with, and harder to damage deliberately.

If a cable is cut or a unit is attacked, what matters next is whether visibility disappears immediately. RockFLEET Assured includes backup battery capability, allowing continued transmission for a period even after cable loss. That additional continuity can matter enormously in a live incident. Even limited continued reporting can preserve situational awareness, support response, and reduce the risk of going blind at the worst possible moment.

Another key strength is flexibility. RockFLEET Assured can be used with our chartplotter, with a customer’s own chartplotter, or integrated into wider bridge systems. That gives operators multiple options to adopt resilient PNT capability without being forced into a rigid operational model. For many customers, the easier a resilient system is to integrate into existing workflows, the more likely it is to be deployed effectively and trusted by crews.

GPS still plays a vital role, but it is no longer enough on its own

As jamming and spoofing attacks become more sophisticated, more deceptive, and more hostile, operators need more than detection. They need trusted alternatives, genuine independence from GNSS, interoperability with existing systems, and resilience that holds up in the real world.

A-PNT helps by reducing dependence on one vulnerable source. Iridium PNT strengthens that approach by providing an encrypted, authenticated, wholly separate, satellite-based PNT capability, and RockFLEET Assured makes that capability operationally useful: survivable, flexible, interoperable, and built for continuity under pressure. That is what modern navigational resilience looks like.

Talk to us about resilience in practice

To explore how RockFLEET Assured can strengthen navigation resilience in your existing bridge environment, get in touch with our team.

Complete the form, or email hello@groundcontrol.com, and we will respond within one working day.


Water quality monitoring no longer sits at the edge of operational strategy. It’s at the center of regulatory exposure, public reporting, and engineering accountability.

Designing or managing remote water quality monitoring systems lays the foundation for data continuity, defensible timestamps, and structured reporting outputs that withstand regulatory scrutiny. Across the UK, the United States, and other regulated markets, compliance expectations are tightening. Monitoring systems must now deliver continuous data, auditable records, and structured exports suitable for regulator portals and public dashboards.

Satellite IoT plays a defined role in meeting this regulatory need. The right architecture for the job reduces reliance on intermittent or patchy cellular coverage and strengthens confidence in the data transfer. The result is not simply connectivity; it’s system resilience and credibility.

Here we explore two remote water monitoring examples, and what they show about building confidence in the audit trail that follows.

How Regulatory Pressure is Reshaping Monitoring Design

In the UK, the Environment Act 2021 introduced statutory duties around monitoring upstream and downstream of storm overflows and sewage disposal works (Section 82). The UK’s storm overflow policy guidance outlines expectations for monitoring and transparency. Then, in 2023, environmental penalties in the UK were uncapped, removing the previous £250,000 ceiling for serious breaches. Enforcement activity has since reflected this increased accountability.

In the United States, the Clean Water Act operates through the National Pollutant Discharge Elimination System (NPDES). Operators must submit Discharge Monitoring Reports (DMRs), and reported violations contribute to a facility’s Significant Noncompliance status. In short, the regulatory frameworks are established; what continues to evolve is their technical implication.

[Timeline: water regulation milestones]

How Water Monitoring Systems Support Regulatory Standards

Continuous or near-continuous data capture

Sensors record measurements at defined intervals, creating a consistent stream of operational data.

Time-stamped, immutable records

Each measurement is stored with a secure timestamp, preserving a verifiable historical record.

Documented uptime and connectivity behavior

The system logs device status and communication performance across the monitoring lifecycle.

Clear traceability and secure data retention

Collected telemetry is stored in protected systems designed to preserve operational and compliance records.

API export capability for regulator integration

Monitoring data can be programmatically exported into reporting, compliance, and regulatory systems.

Why Connectivity Determines Confidence

Many remote river gauges, reservoirs, and discharge sites sit outside reliable cellular coverage. Even where coverage exists, service continuity can degrade during extreme weather, power disruption, or infrastructure failure. Total reliance on cellular connectivity introduces exposure.

Satellite IoT addresses this constraint directly. Low Earth Orbit (LEO) networks provide global coverage without dependence on local infrastructure. While satellite is not the right fit for every data profile, it offers coverage certainty where terrestrial networks cannot.

For message-based telemetry, Iridium Short Burst Data (SBD) supports low latency, small payload messaging suited to alarms, status updates, and exception-based reporting. That makes it particularly relevant where compliance-related events need to be captured and transmitted reliably from remote locations.
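As a sketch of what small-payload SBD telemetry can look like at the byte level, here is one way to pack a reading into a compact fixed-width message. The field layout, units, and scaling are illustrative assumptions, not a RockBLOCK wire format.

```python
import struct

# Illustrative 10-byte layout (little-endian, no padding):
# uint8 message type, uint32 epoch seconds, int16 water level in cm,
# uint16 flow in tenths of L/s, uint8 status flags.
FMT = "<BIhHB"

def encode_reading(msg_type, ts, level_cm, flow_lps, flags=0):
    """Pack one reading into a compact binary payload. Small fixed-width
    messages like this suit bandwidth-constrained satellite links far
    better than verbose text formats."""
    return struct.pack(FMT, msg_type, ts, level_cm, int(flow_lps * 10), flags)

payload = encode_reading(1, 1700000000, 342, 12.5)
assert len(payload) == struct.calcsize(FMT)  # 10 bytes per reading
```

A receiving system applies the mirror-image `struct.unpack(FMT, payload)` to recover the values, which is why the layout must be agreed and versioned on both sides of the link.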

In practice, resilient remote monitoring often combines connectivity approaches to balance immediacy, scale, and power constraints. The examples below show what that can look like in water utility operations.

[Diagram: RockBLOCK RTU reservoir monitoring deployment]

Case Study 1: Reservoir Monitoring and Remote Pump Control

 

The first example involves a remote reservoir that requires dependable monitoring and controlled pump activation despite unreliable cellular coverage. Two RockBLOCK RTUs were installed.

The upper unit measures water level and flow. It operates outside cellular range and uses Iridium SBD to transmit short command and status messages. When the water level is sufficient, it signals the lower RTU to activate the pump.

The lower RTU actuates the pump and sends a periodic cellular heartbeat to confirm system availability, providing near-real time confirmation of upstream conditions, controlled pump activation, documented event timestamps, and independent verification of site status.

From a compliance perspective, the Cloudloop platform retains a time sequenced record of level measurement, command transmission, pump activation, and heartbeat confirmation. Therefore, if questioned, the operational timeline can be reconstructed.
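The time-sequenced record described above can be illustrated with a minimal model of the two-RTU logic. The level threshold, event names, and in-memory log are assumptions for the sketch, not the deployment’s configuration or Cloudloop’s schema.

```python
from datetime import datetime, timezone

EVENT_LOG = []  # stand-in for the platform-side, time-sequenced audit record

def log_event(event, **detail):
    """Append a timestamped event, preserving the operational timeline."""
    EVENT_LOG.append({"ts": datetime.now(timezone.utc).isoformat(),
                      "event": event, **detail})

def upper_rtu_cycle(level_cm, threshold_cm=300):
    """Upper RTU: record the measured level and issue a pump command
    only when the reservoir level is sufficient (threshold illustrative)."""
    log_event("level_measured", level_cm=level_cm)
    if level_cm >= threshold_cm:
        log_event("pump_command_sent", target="lower_rtu")
        return True
    return False

def lower_rtu_activate():
    """Lower RTU: actuate the pump and record the activation."""
    log_event("pump_activated")

if upper_rtu_cycle(350):
    lower_rtu_activate()
```

Because every step is logged with a timestamp, the sequence measurement → command → activation can be reconstructed after the fact, which is the property a compliance reviewer actually needs.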

[Diagram: RockBLOCK RTU river level and velocity deployment]

Case Study 2: River Health Monitoring With Micro Data Logging

 

In our second deployment example, a local water authority needed to measure river level and velocity, derive discharge, and capture core water quality indicators. Rather than installing a full stand-alone data logger with integrated satellite comms, RockBLOCK RTU’s micro data logging capability was used to capture essential metrics.

Thresholds were configured so sudden turbidity spikes or abnormal conductivity shifts triggered alerts. Measurements flowed directly into the connected software via Cloudloop API integration.

This approach provided continuous, time-stamped records, exception alerts to support rapid investigation, structured export into mapping and reporting tools, and reduced integration overhead. For remote water monitoring more broadly, logging infrastructure and communications layers remain unified rather than fragmented across separate systems.
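Exception-based alerting of this kind reduces to a per-parameter threshold check: quiet readings send nothing, and only breaches generate a message. The limits below are illustrative, not the authority’s configured values.

```python
# Illustrative alert bands: a turbidity ceiling and a conductivity range.
THRESHOLDS = {"turbidity_ntu": 50.0, "conductivity_uscm": (100.0, 1500.0)}

def check_sample(sample):
    """Return alert messages only for readings that breach their band.
    An empty list means nothing needs to be transmitted, which keeps
    satellite airtime (and cost) down."""
    alerts = []
    if sample["turbidity_ntu"] > THRESHOLDS["turbidity_ntu"]:
        alerts.append(f"turbidity spike: {sample['turbidity_ntu']} NTU")
    lo, hi = THRESHOLDS["conductivity_uscm"]
    if not lo <= sample["conductivity_uscm"] <= hi:
        alerts.append(f"conductivity out of band: {sample['conductivity_uscm']} uS/cm")
    return alerts

# A sudden turbidity spike with normal conductivity raises one alert:
alerts = check_sample({"turbidity_ntu": 82.0, "conductivity_uscm": 640.0})
```

The same pattern scales to any parameter set; the operational choice is where to place the bands so alerts are rare enough to be actionable but tight enough to catch genuine events.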

Building an Auditable Data Pathway

Confidence in remote water quality monitoring doesn’t come from a single device, but from the integrity of the whole data pathway. When time stamps are preserved across each layer, transmissions are acknowledged, and configuration changes are logged, reliance on manual consolidation falls, reducing errors and saving both time and money.

[Diagram: the data pathway facilitated by RockBLOCK RTU]

 

Auditability Across the Monitoring Chain

Within the layered architecture described above, the platform layer is where telemetry becomes a structured operational record.

Cloudloop Data provides the ingestion and decoding layer between satellite transmission and operational systems. Messages received from RockBLOCK RTU are converted into readable sensor values, normalized, time stamped, and made available through a secure portal or API.

This removes the need to manage raw payload parsing internally and helps ensure each transmission is logged with the metadata needed for traceability, including device identity, transmission time, and delivery status.
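A decode-and-normalize step of this kind can be sketched under an assumed two-field wire format; the layout and metadata names are illustrative, not Cloudloop’s actual schema.

```python
import struct

# Assumed wire format: uint32 epoch seconds, int16 water level in cm.
FMT = "<Ih"

def decode_message(device_id, payload, received_at):
    """Turn a raw satellite payload into a normalized, traceable record:
    decoded sensor values plus the metadata (device identity, sensor-side
    and gateway-side timestamps, delivery status) needed for an audit trail."""
    ts, level_cm = struct.unpack(FMT, payload)
    return {
        "device": device_id,
        "measured_at": ts,           # sensor-side timestamp
        "received_at": received_at,  # gateway-side timestamp
        "level_m": level_cm / 100.0, # normalized to SI units
        "delivery_status": "delivered",
    }

raw = struct.pack(FMT, 1700000000, 342)
record = decode_message("RB-0001", raw, 1700000042)
```

Keeping both timestamps in the record is what makes latency visible and the timeline reconstructable, rather than just the final value.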

In the reservoir monitoring example, level measurements, command triggers, and pump activation confirmations are preserved as a time-sequenced operational record.

In the river health deployment, turbidity and conductivity alerts are decoded and logged with consistent metadata before export into reporting and GIS tools.

[Screenshot: Cloudloop Data decoded messages]

Visualizing and Integrating Monitoring Data

Cloudloop Insights builds on that structured data foundation by providing visualization, threshold configuration, and remote device control. Dashboards show both live and historical values, while threshold breaches, device status changes, and configuration updates are retained as part of the operational record, helping link system behavior back to defined monitoring parameters.

Both Cloudloop Data and Cloudloop Insights expose APIs, allowing telemetry and control data to flow into regulator submission tools, GIS environments, enterprise asset management systems, and custom EMS platforms. This API-first approach supports automated export for NPDES or UK reporting workflows, structured integration with mapping systems, programmatic access to historical telemetry, and closer alignment between remote measurement and institutional record-keeping.

[Screenshot: Cloudloop Insights]

As remote water quality monitoring comes under greater regulatory scrutiny and public visibility, monitoring systems need to support continuous measurement, structured reporting, and reconstructable data lineage across distributed, infrastructure-poor environments. Together, these examples show how message-based satellite telemetry, edge logging, and structured platform integration can support compliance-grade monitoring.

Can we help?

If you are reviewing or upgrading a remote monitoring architecture, our Technical Solutions team can help assess site conditions, regulatory obligations, sensor requirements, latency needs, and audit trail completeness.

Complete the form or email hello@groundcontrol.com and we’ll get back to you within one working day.


Given that 90% of international trade is carried by sea, maritime safety is fundamental. For most of the modern maritime era, the formula was relatively simple: assess the route, understand the threat environment, adapt operating procedures, and, when justified by risk, place experienced personnel on board to deter, respond, and protect. That approach still matters, but it is no longer sufficient.

Today, merchant shipping is delegating a far broader range of responsibilities to private maritime security companies (PMSCs). The remit is no longer limited to protection from physical threats; increasingly, it also includes support for the operational risks created by disruption to critical onboard systems. One of the clearest and fastest growing examples is navigation resilience.

For maritime security providers supporting secure fleet operations, advising owners and operators, and delivering risk-managed transit, this change is already taking shape in practice. Clients may not use the term “A-PNT” (Assured Position, Navigation, and Timing), and they may not explicitly ask for “navigation resilience”. But the expectation is there nonetheless: in the questions they raise, the incident reporting they request, and the operational standards they increasingly assume are in place. The reason is straightforward: when positioning fails at sea, it becomes a security issue whether anyone labels it that way or not.

Security Has Expanded Beyond the Physical

The biggest misconception in maritime security right now is that navigation disruption is a niche technical issue – something for bridge teams, electronics specialists, or a ship’s IT provider. In practice, it has become a frontline operational risk.

GNSS disruption, jamming, and spoofing are no longer rare anomalies confined to active conflict zones. Independent analysis by C4ADS has documented widespread maritime spoofing events affecting thousands of vessels, particularly in the Black Sea and the Middle East. Subsequent advisories from the U.S. Coast Guard Navigation Center (NAVCEN) and UK Maritime Trade Operations (UKMTO) have continued to warn of GPS interference affecting commercial traffic in multiple regions.

When a vessel loses trustworthy position and timing, the impact cascades fast. Routing decisions become uncertain, safety margins shrink, bridge teams hesitate, and in high consequence waters, uncertainty becomes vulnerability. That’s why navigation resilience is moving into the security deliverables category. Not because it’s a buzzword, but because the outcomes are security outcomes – the ability to maintain control, continuity, and confidence in the vessel’s movements. And, as shipping companies continue to lean on third party providers to manage risk, the responsibility is naturally shifting to the people who already own the security mission.


Merchant Shipping Is Outsourcing Resilience, Not Just Risk

The International Maritime Organization (IMO) has formally recognized navigation systems as part of a vessel’s cyber risk surface. U.S. Department of Transportation reporting on Complementary PNT strategies has likewise acknowledged the vulnerability of civil GPS and the need for resilient alternatives. At the same time, the operational picture has become harder to ignore. From spoofed coordinates linked to tanker incidents, to cargo vessels disappearing from satellite tracking under jamming conditions, interference with positioning and navigation is now a live operational issue.

That has direct implications for maritime security. It is no longer enough for PMSCs to track piracy patterns and regional instability. They now have to understand electronic disruption, degraded communications, cyber-enabled interference, and deliberate manipulation of navigation systems. The threat landscape is no longer confined to the physical domain; it now extends into the systems vessels rely on to operate safely.


For shipping companies, the response is familiar. When risk grows faster than internal capacity, they outsource. First that meant physical protection. Then it meant intelligence and route advisory. Now it increasingly means outsourcing resilience, especially where failure has immediate operational consequences. Navigation is one of the clearest examples.

The New Scope of PMSCs

PMSCs are increasingly being drawn into questions that would once have remained strictly on the bridge:

  1. What happens if GNSS becomes unreliable mid-transit?
  2. How quickly can we detect spoofing versus simple signal loss?
  3. How do we keep the bridge team confident in the vessel’s position when the primary reference is compromised?
  4. What proof can we provide after the fact – to the owner, to insurers, to regulators, and to internal stakeholders – that the vessel maintained safe navigation?


These are no longer hypothetical concerns; they are operational questions that sit directly inside the modern security mission. Marine insurers and P&I clubs such as Allianz and Gard have already published guidance highlighting navigation system vulnerabilities as emerging operational risks. The Nautical Institute’s Mariners’ Alerting and Reporting Scheme (MARS) has also captured incident reports reflecting confusion and degraded situational awareness linked to navigation system anomalies. In many cases, the crew onboard is highly competent but not equipped with the tools or the time to manage GNSS integrity issues in a repeatable way. That is not a training failure – it is an equipment and process gap.


Why A-PNT Is Becoming the Navigational Standard

The real challenge in modern navigation is not only loss of signal, but loss of trust. In a disrupted environment, the greatest risk is often not that positioning disappears, but that it appears reliable when it is in fact wrong. That is what spoofing does, and it turns navigation failure into an operational and security problem.

That’s why A-PNT is becoming increasingly important. It’s often not a single product or platform, but a broader resilience approach: ensuring that positioning, navigation, and timing remain dependable and verifiable when GNSS is degraded, denied, or manipulated.

Solutions such as Iridium PNT sit within that broader picture. They offer an additional means of maintaining trusted PNT in operating environments where traditional GNSS may be vulnerable to interference.

For maritime operators, that is the real shift. A-PNT is becoming less of a specialist capability and more of an operational standard, because resilience in navigation is increasingly inseparable from resilience in the voyage itself.


Where RockFLEET Assured Fits into Modern Maritime Security

RockFLEET Assured, powered by Iridium PNT, enters the market at a moment when PMSCs are increasingly expected to provide resilience as part of secure fleet operations, not just protection from physical threats.


Designed specifically for maritime deployment, the marine-grade smart antenna delivers cryptographically authenticated positioning and an assured navigation reference for vessels operating in environments where GNSS integrity cannot be guaranteed. In practice, that means an independent source of trusted position data when GPS or other GNSS signals are degraded, denied, or manipulated.

Its value is operational as much as technical. By comparing GNSS inputs with Iridium PNT outputs, RockFLEET Assured helps bridge teams and shore-based personnel identify anomalies more quickly, detect possible spoofing or jamming, and respond with greater confidence. Event data can be logged and transmitted ashore, creating a defensible record for incident review, compliance documentation, or insurer scrutiny.
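The cross-check described above can be sketched as a simple comparison between two independent fixes. The code below is illustrative only, not RockFLEET Assured’s actual logic: the 500 m threshold, the classification labels, and the fix representation are all assumptions.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two latitude/longitude fixes."""
    r = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def classify_fix(gnss_fix, assured_fix, threshold_m=500.0):
    """Cross-check a GNSS fix against an independent assured reference.

    Fixes are (lat, lon) tuples, or None when no fix is available.
    The threshold is purely illustrative: a real limit depends on the
    accuracy of both sources and how far the vessel can plausibly move
    between samples.
    """
    if gnss_fix is None:
        return "signal_loss"   # jamming or outage: the receiver has no fix at all
    if assured_fix is None:
        return "unverified"    # no independent reference to compare against
    drift = haversine_m(*gnss_fix, *assured_fix)
    return "suspect" if drift > threshold_m else "consistent"
```

The distinction this captures is the one that matters operationally: signal loss is obvious on the bridge, while spoofing produces a plausible-looking fix that only an independent reference can expose.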


Just as importantly, it’s built for repeatable deployment at fleet level. The system is delivered as a single above-deck terminal, with mounting options to suit different vessel types and superstructure layouts, reducing the need for vessel-by-vessel customization. Its IP66-rated enclosure is designed for exposed marine conditions, and no below-deck electronics are required unless bridge view is selected.


Reporting is equally flexible. Through Iridium Messaging Transport (IMT), RockFLEET Assured supports configurable position updates and secure two-way messaging between ship and shore, with reporting intervals tailored to different operational requirements. Integration with Ground Control’s Cloudloop platform enables centralized fleet visibility, while API connectivity supports incorporation into existing monitoring and security systems.

Optional bridge view functionality adds another practical advantage, allowing assured positioning data to be displayed alongside standard GNSS outputs. For crews, that provides a clearer visual reference during interference events and helps reduce hesitation when rapid navigational decisions are required.

For PMSCs, that makes RockFLEET Assured a practical way to embed navigation resilience into a broader security offering. Rather than treating disruption as a vague technical failure, it helps turn it into something observable, reportable, and manageable.

What Changes Operationally for PMSCs

PMSCs often operate under heightened expectations for compliance, documentation, and professionalism. Clients – corporate security teams, fleet operators, insurers, charterers – expect measurable capability, not procedural reassurance. Without A-PNT, disruption remains ambiguous. With it, disruption becomes detectable, documentable, and defensible. That shift strengthens operational reporting, reduces decision latency on the bridge, and improves client confidence. It becomes a deliverable in a security model and part of how PMSCs define secure fleet operations in 2026 and beyond.

The Future of Maritime Security

The maritime security industry is not abandoning its roots: physical threats still exist, high-risk areas still demand proven experience, and human expertise still matters. But the center of gravity is shifting as electronic disruption, contested signal environments, and hybrid risk become normalized features of global shipping lanes. International policy bodies, insurers, and national governments have all acknowledged this reality in recent years.

The most forward-looking maritime security providers are therefore evolving from personnel-based security offerings to layered security and resilience platforms. They are expanding into technical advisory, electronic threat awareness, and operational continuity support. They are positioning themselves as secure fleet partners, not just voyage contractors, and A-PNT is one of the cleanest, most valuable additions to that stack.

The next era of maritime security will be defined by who can keep ships operating safely and confidently when the environment becomes contested physically, electronically, and operationally. Navigation resilience is becoming a security standard because disruption is becoming the norm, so for PMSCs responsible for secure fleet operations, this is the moment to lead. The companies that adopt A-PNT now, through solutions like RockFLEET Assured, will be the ones positioned to define what security means at sea in the years ahead.

Trusted A-PNT For Navigational Certainty at Sea

For over 20 years, we’ve delivered resilient satellite solutions for remote connectivity and secure communications. We’re proud to support commercial shipping, offshore operators, and maritime security providers with dependable satellite connectivity and assured positioning capabilities designed for the realities of the modern maritime domain.

If you want to offer A-PNT solutions as part of the security strategy for your maritime clients, complete the form, or email hello@groundcontrol.com and we’ll reply within one working day.


Operating beyond cellular coverage is a reality for many ArduPilot-powered vehicles, and satellite is often the only practical backhaul. Recently, the ArduPilot Development Team (with support from Ground Control) documented MAVLink telemetry performance over Iridium Certus 100 using the RockREMOTE UAV OEM modem. The goal: understand what usable telemetry looks like over a real satellite link, and share configuration guidance the community can replicate.

RockREMOTE UAV OEM is a low-SWaP Iridium Certus 100 modem designed for OEM integration on unmanned aircraft. It provides an IP data path for BVLOS command and control (C2), MAVLink telemetry, and payload/edge networking when terrestrial backhaul isn’t available.

What the ArduPilot testing looked at

The work, conducted and written up by Stephen Dade, a member of ArduPilot’s Development Team, evaluated MAVLink telemetry reliability and latency over Iridium Certus 100 across multiple connection configurations. In the reported results, MAVLink telemetry was usable and consistent when configured appropriately, with typical round-trip latency in the sub-2-second range.

Key takeaways from the write-up

  • Reliable MAVLink telemetry over Certus 100 (with the right config): The test summary reported measured latencies in the ~600-1600 ms range, with broader observed ranges depending on protocol and conditions.
  • Stream rates matter under uplink constraints: The write-up notes Certus 100 uplink limits and recommends configuring ArduPilot stream rates around 2 Hz to stay within available throughput.
  • Secure connectivity works best when designed for satellite: High latency/low bandwidth links benefit from VPN patterns optimized for satellite. Ground Control supports architectures that terminate secure tunneling at the gateway, and offers WireGuard where on-link VPN is required.
  • Installation can make or break performance: Antenna placement and local obstructions (trees/buildings) had a major impact. Roof height mounting improved results versus a low height suburban placement.
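The stream-rate point above can be sanity-checked with a back-of-envelope link budget. The sketch below is illustrative and not part of the ArduPilot write-up: the framing overhead, safety margin, and any payload sizes you feed in are assumptions to verify against the MAVLink dialect and link you actually use.

```python
def telemetry_bytes_per_sec(streams, overhead_bytes=12):
    """Estimate MAVLink bandwidth for a set of telemetry streams.

    streams: iterable of (payload_bytes, rate_hz) pairs, one per message type.
    overhead_bytes: per-message framing cost (MAVLink 2 adds roughly
    12 bytes of header and checksum; signing adds more).
    """
    return sum((payload + overhead_bytes) * hz for payload, hz in streams)

def fits_uplink(streams, uplink_bps, margin=0.5):
    """True if the stream set uses no more than `margin` of the uplink.

    Generous headroom matters on satellite links, where retries and
    tunnel overhead consume part of the nominal rate.
    """
    return telemetry_bytes_per_sec(streams) * 8 <= uplink_bps * margin
```

For example, three ~28-byte messages at 2 Hz come to roughly 2 kbps, which suggests why a 2 Hz stream rate leaves useful headroom on a constrained uplink.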
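On the VPN point, satellite-friendly WireGuard tuning mostly comes down to a conservative MTU and persistent keepalives. The fragment below is a hypothetical sketch, not configuration from the write-up: keys, addresses, and the endpoint are placeholders, and the values shown are starting points to test on your own link.

```ini
# Hypothetical vehicle-side WireGuard peer for a high-latency satellite link.
[Interface]
PrivateKey = <vehicle-private-key>
Address = 10.0.0.2/32
MTU = 1280                    # conservative MTU to avoid fragmentation on the satellite path

[Peer]
PublicKey = <ground-endpoint-public-key>
Endpoint = gcs.example.com:51820
AllowedIPs = 10.0.0.1/32
PersistentKeepalive = 25      # keep NAT/session state alive through idle periods
```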

Why this matters for integrators

The documented setup used a representative unmanned systems stack (ArduPilot flight controller, Ethernet bridging, and ground endpoint infrastructure), and the notes are practical for anyone building satellite-enabled autonomy: you can design for predictable behavior, but you have to design around constraints like latency, throughput, and installation quality.

“These results help validate a satellite telemetry approach that can extend operations into truly remote areas, and provide the community with clear configuration guidance.” – Stephen Dade, ArduPilot Development Team

“What’s exciting here is giving builders real architectural choice: lean messaging for efficient telemetry, and IP connectivity when an uninterrupted C2 link matters.” – Alastair MacLeod, Ground Control CEO

Read the full technical write-up

The full methodology, recommended configurations, and measured performance data are here: https://discuss.ardupilot.org/t/ardupilot-and-the-iridium-certus-satellite-service.

Need help integrating Certus 100 with ArduPilot?

If you’re looking at satellite-enabled ArduPilot telemetry using Iridium Certus 100, including hardware availability and integration guidance, we’re here to help.

Complete the form, or email hello@groundcontrol.com, and we’ll connect you with our drone specialists within one working day.
