

The software stack of Alpheus is built on top of the Robot Operating System (ROS) by Willow Garage. ROS is installed on top of the Debian Linux operating system, running on an Intel Core i7 processor. A Mini-ITX on-board computer is housed inside the pressure hull.

The software stack has been designed from scratch this year and provides the following benefits:

  • Modular design with optimal task distribution.

  • Abstract asynchronous inter-process communication mechanisms.

  • Redundancy in process life-cycles in case of crashes.

  • Shared memory system for vehicle parameter variables.

  • Improved front-end controls for easy debugging of mission.

The Robot Operating System is an industrial-grade robotics framework that provides various services and tools which significantly reduce design cycle time. The software stack of Alpheus is modularized into various processes that are completely independent of each other, yet are able to communicate using an asynchronous messaging protocol.

The software subsystems of Alpheus are divided as follows:

  • Mission Planner
  • Motor Controller
  • Vision Server
  • Action Server/ Action Client
  • User Front End
  • Telemetry

All these systems are integrated into the ROS infrastructure in the form of nodes with asynchronous communication among them. Topics provide streaming data communication over TCP or UDP, while Services provide synchronous request-response calls. The software team is mainly responsible for developing software for mission planning, computer vision and active vehicle localization.
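The decoupling that topics provide can be illustrated with a minimal publish/subscribe sketch in plain Python. This is only an analogy for the pattern, not the ROS API itself; the topic name and message contents are hypothetical.

```python
import threading

class TopicBus:
    """Minimal publish/subscribe bus illustrating how decoupled nodes
    exchange messages asynchronously (ROS topics play this role on Alpheus)."""

    def __init__(self):
        self._subscribers = {}   # topic name -> list of callbacks
        self._lock = threading.Lock()

    def subscribe(self, topic, callback):
        with self._lock:
            self._subscribers.setdefault(topic, []).append(callback)

    def publish(self, topic, message):
        with self._lock:
            callbacks = list(self._subscribers.get(topic, []))
        for cb in callbacks:     # fire-and-forget: the publisher never
            cb(message)          # waits on any particular consumer

# Hypothetical usage: the vision server publishes buoy coordinates and the
# mission planner consumes them without knowing which node produced them.
bus = TopicBus()
received = []
bus.subscribe("/vision/buoy_center", received.append)
bus.publish("/vision/buoy_center", {"x": 320, "y": 240})
```

The publisher and subscriber share only the topic name and message format, which is what lets the subsystems crash and restart independently.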


The software architecture of Alpheus is divided into two parts:

A High Level Architecture, which involves abstract planning algorithms such as mission planners and direct waypoint navigation functions. It also includes the vision server, which is responsible for image processing and object recognition on the camera images. Most of these algorithms run on the onboard computer.

A Low Level Architecture where the onboard sensors and actuators of the vehicle are interfaced to the microcontroller. The directives from the high level software are fed to the microcontroller which controls the thrusters of the vehicle.


Alpheus’s architecture is highly distributed, and abstraction is achieved in the form of “nodes” with asynchronous inter-process communication mechanisms between them.

The various subsystems are all started at runtime and are actively involved in asynchronous IPC once started. The system is fault tolerant: if any system error causes a node to shut down, it is immediately restarted.


Alpheus’s software is developed in various layers of abstraction. The low level software comprises the PID controllers, the microcontroller kernel and the communication protocols that interface the microcontroller with the on-board computer. The rest of the software is mostly the high level architecture, which sends commands, e.g. navigational commands, to the low level controllers. Sensor data flows back up through the same interface.


Robust vehicle control is achieved in Alpheus through a combination of six carefully tuned Proportional-Integral-Derivative (PID) controllers. The autonomous operation of the AUV is brought about using set-point directives to the PID control loops. The microcontroller board is programmed with a custom kernel that constrains the operating frequencies of the control loops, along with loops for collecting sensor data and relaying information to the on-board computer.

The pose of the vehicle is determined from the inbound IMU data. Each PID controller maintains the pose of the vehicle using set-point directives from the High Level Software. The controller computes the error in each of the yaw, pitch, roll, surge and sway axes. A high frequency error minimization algorithm with average weighting of the output corrects the pose error to achieve the target set-point.
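A single axis of this scheme can be sketched as a textbook discrete-time PID loop. The gains, time step and toy plant below are illustrative placeholders, not the vehicle's tuned values or dynamics.

```python
class PIDController:
    """One PID control loop: drives the error between a set-point and a
    measurement (e.g. commanded yaw vs. IMU yaw) toward zero."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement                    # current axis error
        self.integral += error * self.dt                  # accumulated error
        derivative = (error - self.prev_error) / self.dt  # rate of change
        self.prev_error = error
        return (self.kp * error
                + self.ki * self.integral
                + self.kd * derivative)

# Drive a toy first-order plant toward a 10-degree yaw set-point.
pid = PIDController(kp=2.0, ki=0.1, kd=0.5, dt=0.01)
yaw = 0.0
for _ in range(1000):
    yaw += pid.update(10.0, yaw) * 0.01   # plant integrates the control output
```

On the vehicle, six such loops run concurrently at the frequencies fixed by the microcontroller kernel, one per controlled axis.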


The high level planning software is developed as a State Machine implemented in Python using the SMACH (State Machine) library. The competition tasks are divided into a set of states, executed sequentially or iteratively, with a number of inputs and results associated with each state. The state machine transitions on the successful completion of a task, and failure to complete a task is logged into the system. Each state also has a time-out feature which helps to transition to the next state in case a task is taking too long to complete.
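The pattern can be sketched with a hand-rolled state machine; the real stack uses SMACH, whose API differs, and the state names and outcomes below are hypothetical.

```python
import time

class State:
    """One mission task: runs until its task function reports an outcome,
    or until the per-state time-out forces a transition."""

    def __init__(self, name, task, timeout):
        self.name, self.task, self.timeout = name, task, timeout

    def execute(self):
        start = time.monotonic()
        while time.monotonic() - start < self.timeout:
            result = self.task()       # returns 'succeeded', 'failed', or None
            if result is not None:
                return result
        return 'timed_out'             # time-out outcome, as described above

def run_mission(states, transitions, start):
    current = start
    log = []
    while current not in ('mission_done', 'mission_aborted'):
        outcome = states[current].execute()
        log.append((current, outcome))            # outcomes are logged
        current = transitions[current][outcome]   # table-driven transition
    return log

# Hypothetical two-task mission: pass the gate, then touch the buoy.
states = {
    'GATE': State('GATE', lambda: 'succeeded', timeout=5.0),
    'BUOY': State('BUOY', lambda: 'succeeded', timeout=5.0),
}
transitions = {
    'GATE': {'succeeded': 'BUOY', 'failed': 'mission_aborted', 'timed_out': 'BUOY'},
    'BUOY': {'succeeded': 'mission_done', 'failed': 'mission_aborted', 'timed_out': 'mission_done'},
}
log = run_mission(states, transitions, 'GATE')
```

Because each state only names its outcomes and the transition table maps them to successors, reordering or reusing tasks is a matter of editing the table rather than the tasks themselves.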

The design of mission planners is very rapid because of the high level design approach used for developing the state machines. Also, the states of the state machines can be bundled into containers and reused as abstract state machines inside another state machine.

The main link between the High Level and the Low Level architecture of Alpheus is the Action Server and Action Client interface. The Action Server is used to send goals to the Action Client, which executes them until completion. Benefits of using the action server include execution of pre-emptive goals, active goal completion feedback and fault tolerance.
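The goal/feedback contract of such an action interface can be sketched as follows. This is only an illustration of the pattern; the real system uses ROS actionlib, whose API differs, and the goal fields and step size here are hypothetical.

```python
class ActionServer:
    """Executes one goal to completion, reporting progress as it goes and
    allowing a newer goal to pre-empt the current one."""

    def __init__(self, execute_step):
        self.execute_step = execute_step  # advances the goal, returns progress 0..1
        self.preempted = False

    def send_goal(self, goal, feedback_cb):
        progress = 0.0
        while progress < 1.0:
            if self.preempted:            # a newer goal pre-empts this one
                return 'preempted'
            progress = self.execute_step(goal, progress)
            feedback_cb(progress)         # active completion feedback
        return 'succeeded'

def step(goal, progress):
    # Hypothetical executor: move 25% of the way to the waypoint per tick.
    return min(1.0, progress + 0.25)

feedback = []
server = ActionServer(step)
status = server.send_goal({"waypoint": (3.0, 1.5)}, feedback.append)
# status → 'succeeded', feedback → [0.25, 0.5, 0.75, 1.0]
```

The feedback stream is what lets the mission planner monitor a long-running goal (and abandon it) instead of blocking blindly until it finishes.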


The vision server of Alpheus is the main system for image processing. The images are obtained using a set of two cameras onboard the vehicle, one for forward and another for bottom vision.

The image processing software employs a series of algorithms to detect and segment underwater objects. The main task of the vision server is to compute the geometric coordinates of various underwater objects and relay the information back to the vehicle controllers. An example image processing pipeline to detect a colored buoy can be summarized as follows:

  1. White balance the input image to improve contrast.
  2. Convert to an appropriate color space to provide lighting invariance.
  3. Segment objects in the image using color thresholding.
  4. Apply a set of erosion and dilation filters to smooth out the resultant binary mask.
  5. Apply the Circular Hough Transform on the binary image to detect circular contours.
  6. Select the biggest circular contour, which corresponds to the target buoy.
  7. Apply cvMoments to compute the inertial center of the buoy.
  8. Relay the computed information over a ROS Topic so that other subsystems can utilize it.
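The moments step at the end of the pipeline reduces to computing the zeroth and first image moments of the binary mask. The sketch below shows that computation on a synthetic mask; the helper and the 9x9 toy blob are illustrative, not the vehicle's actual code.

```python
import numpy as np

def mask_centroid(mask):
    """Centroid of a binary mask via image moments, mirroring what
    cvMoments provides in the pipeline above."""
    m00 = mask.sum()               # zeroth moment: blob area in pixels
    if m00 == 0:
        return None                # nothing was segmented
    ys, xs = np.nonzero(mask)
    m10 = xs.sum()                 # first moment about x
    m01 = ys.sum()                 # first moment about y
    return (m10 / m00, m01 / m00)  # (cx, cy) in pixel coordinates

# Synthetic mask with a filled rectangle standing in for the buoy blob.
mask = np.zeros((9, 9), dtype=np.uint8)
mask[3:6, 4:7] = 1
cx, cy = mask_centroid(mask)       # → (5.0, 4.0)
```

The resulting pixel coordinates are what the vision server publishes for the controllers to act on.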


The Vision Debugging Suite is a Vision Front-end that is used to dynamically view the results of the image processing pipeline. The vision server provides active controls to vary the image processing parameters using the Dynamic Reconfigure API under ROS.

The Vision Suite GUI is designed using the Qt framework. There are provisions to add various filter chains and analyze their collective effect on the input image. This is very helpful for debugging image analysis errors and varying the input parameters accordingly. The result of the process is viewed in real-time.

The vision suite is extensible, and new features can be added with seamless integration into the existing software. The messaging system employed is based on ROS messages. The GUI interface can be used to change the Dynamic Reconfigure server parameters. Another feature provided by the Vision Suite is saving a visual feed to disk. This feature employs rosbag services to log the data. The saved visual feed can also be played back for inspection at a later point in time.