
RhoMobile Blogs

3 Posts authored by: Michael Toews

RhoHub upgrade

Posted by Michael Toews May 21, 2014

Our RhoHub service has been upgraded and now includes the Rhodes versions listed below!

 

3.4.2.1 – WinMobile, Win32. No Android or iOS for this version!

3.5.1.14 – WinMobile, Win32, Android, iOS.

4.0.9 – WinMobile, Win32, Android, iOS.

4.1.6 – WinMobile, Win32, Android, iOS.

 

Also, iOS builds now use the iPhone 7.1 SDK.

These versions of Rhodes/RhoElements are the only versions that are currently supported.

 

NOTE: Support for Rhodes 3.4.2.1 will be removed at the end of June.

 

Moving forward, we will continue to support a total of three versions of Rhodes / RhoElements on RhoHub: the most current version and the two prior versions. We will also post a RhoHub support matrix showing the versions of Rhodes, the platforms they can build for, and an estimated end of life for support of each version on RhoHub.

Using the Rhodes Sensor API

 

 

 

Among the numerous APIs that Rhodes exposes for your device is the Sensor API, which you can use to connect to, and read from, the many different sensors your device has. In this post I'll demonstrate the Sensor API by using a device's accelerometer to detect a shake. Once the shake is detected, I'll navigate to a page housing a signature capture box; the signature can then be accepted either by a button click or by an additional shake of the device.

 

I will also be using a third party tool called Bootstrap, which you may have seen in other blog posts and tutorials around Rho products. Although the use of Bootstrap is optional, I highly recommend its usage as it makes design and layout a breeze.

 

If you'd like to download this app to follow along, simply clone this repo:

 

git clone https://github.com/rhomobile/sensorapi_example.git

 

That having been said, let's get started!

 

 

A Note About the Sensor API

While there are many different sensors exposed through the Sensor API, not all sensors will work with all devices. For example, a Samsung Galaxy S4 has an ambient temperature sensor (hardware) built into the phone, so I could use the ambient temperature sensor through the Sensor API to read the ambient temperature in the area. This will not work on an Apple iPhone 5S, however, since it does not have the hardware necessary to make this API usable. To fully understand which sensors your device has, you must research your specific device for a list of available sensor hardware.

 

This also holds true for the software components on your device. For example, the temperature sensor for Android was introduced in Android API level 14 so, if you are building your app with a version lower than this, you will not be able to use the temperature sensor even if you have the right hardware. Before using the Sensor API it is important to know the limitations of your device's hardware AND software so that your hard work does not end in frustration.

 

Once you have researched the capabilities of your device, you'll want to check out our documentation on the Sensor API. There you'll see how this API is implemented in both Ruby and JavaScript, and the different sensors that you can read from. For this example app I am using the accelerometer, since almost every device on the market today has an accelerometer and the necessary software to listen to it.

 

Step 1 - Defining the Sensor

Before using the Sensor API, you have to create a Sensor object of a specific type; each sensor type has its own set of attributes. You do this by calling the Sensor.makeSensorByType() method, which takes a single parameter describing the sensor you are trying to use. Typically you will use one of the SENSOR_TYPE constants for this parameter value. If the sensor does not exist, the method returns null. Otherwise you will have a Rho.Sensor object and will be able to use the methods and properties associated with this API.

 

Step 2 - Getting Sensor Information

Now that you have a Sensor defined you will want to get values from the sensor. This is accomplished in two ways.


A) Reading values - To read values synchronously, you can use the Sensor.readData method. This method returns an object containing properties representing values for that sensor. The property names depend on the type of sensor you have created. For example:

 

myTempSensor = Rho.Sensor.makeSensorByType(Rho.Sensor.SENSOR_TYPE_TEMPERATURE);
myTempData = myTempSensor.readData();
if (myTempData.status == 'ok') {
  currentTemp = myTempData.temperature_value;
}









 

 

B) Asynchronous Callback method - In most cases you want to continuously get values from the sensor. To do this you set up an asynchronous callback when you call the Sensor.start method. The callback is invoked at a specified interval, controlled by the Sensor.minimumGap property; by default the interval is 200 milliseconds. For example, the code snippet below defines a temperature sensor which will poll every 10 minutes and an ambient light sensor which will poll every 200 milliseconds:

 

myTempSensor = Rho.Sensor.makeSensorByType(Rho.Sensor.SENSOR_TYPE_TEMPERATURE);
myTempSensor.minimumGap = 600000; // 10 minutes (600,000 ms)
myTempSensor.start(tempCallback);

myLightSensor = Rho.Sensor.makeSensorByType(Rho.Sensor.SENSOR_TYPE_AMBIENT_LIGHT);
myLightSensor.minimumGap = 200; // 200 ms is the default
myLightSensor.start(lightCallback);









 

Step 3 - Handling Results

Different sensors will contain different properties in the callback object. For example, the accelerometer returns accelerometer_x, accelerometer_y, and accelerometer_z (all float type), whereas the temperature sensor returns temperature_value (string). To see all the different values that are returned, see the callback tab of the start() method in the Sensor API documentation. Note that 'status' and 'message' properties indicate whether there was an error in getting the values.
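As a concrete sketch, a callback can guard on the 'status' property before touching the values. The property names below follow the accelerometer documentation; the handler function itself is hypothetical:

```javascript
// Hypothetical handler: check 'status' before reading accelerometer
// values; 'message' explains any error. Returns null on error.
function handleAccelerometer(result) {
  if (result.status !== 'ok') {
    console.log('Sensor error: ' + result.message);
    return null;
  }
  // All three accelerometer values are floats
  return {
    x: result.accelerometer_x,
    y: result.accelerometer_y,
    z: result.accelerometer_z
  };
}
```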


 

Defining Shake Detection

Since I would like to reuse the ability to detect shake events, I am going to create my own 'shake' object that exposes two methods:


  1. startWatch(onShake) - starts detecting shakes; takes a parameter that defines the callback function for the shake event.
  2. stopWatch() - stops listening for shake events.


The code below is the complete shake object, which we will now break down and explain.

 

/public/js/application.js

  // Define shake detecting JS
  var shake = (function () {
  var shake = {},
  watchId = null,
  options = { "minimumGap": "300" },
  previousAcceleration = { x: null, y: null, z: null },
  shakeCallBack = null;

  // Start watching the accelerometer for a shake gesture
  shake.startWatch = function (onShake) {
    if (onShake) {
      shakeCallBack = onShake;
    }
    watchId = Rho.Sensor.makeSensorByType(Rho.Sensor.SENSOR_TYPE_ACCELEROMETER);
    if (watchId !== null) {
      watchId.setProperties(options);
      console.log('starting detection');

      watchId.start(assessCurrentAcceleration);
    }
    else
    {
      handleError();
    }
  };

    // Stop watching the accelerometer for a shake gesture
    shake.stopWatch = function () {
      if (watchId !== null) {
        console.log('stopping detection');

        watchId.stop();
        watchId = null;
      }
    };

  // Assess the current acceleration parameters to determine a shake
  function assessCurrentAcceleration(acceleration) {
    var accelerationChange = {};
    if (previousAcceleration.x !== null) {
      accelerationChange.x = Math.abs(previousAcceleration.x - acceleration.accelerometer_x);
      accelerationChange.y = Math.abs(previousAcceleration.y - acceleration.accelerometer_y);
      accelerationChange.z = Math.abs(previousAcceleration.z - acceleration.accelerometer_z);
    }
    // console.log('movement detected:' + (accelerationChange.x + accelerationChange.y + accelerationChange.z).toString());
    if (accelerationChange.x + accelerationChange.y + accelerationChange.z > 30) {
      // Shake detected
      console.log('shake detected');

      if (typeof (shakeCallBack) === "function") {
        shakeCallBack();
      }
      shake.stopWatch();
      setTimeout(shake.startWatch, 1000);
      previousAcceleration = {
        x: null,
        y: null,
        z: null
      }
    } else {
      previousAcceleration = {
        x: acceleration.accelerometer_x,
        y: acceleration.accelerometer_y,
        z: acceleration.accelerometer_z
      }
    }
  }

  // Handle errors here
  function handleError() {
  }

  return shake;
  })();



















 

Helper Variables

  var shake = {},
  watchId = null,
  options = { "minimumGap": "300" },
  previousAcceleration = { x: null, y: null, z: null },
  shakeCallBack = null;









We start by defining variables for the different parts of the sensor that we need to track. Most will start as null or blank.

     - shake - The object that exposes the shake-detection methods.

     - watchId - The variable that will refer to the sensor itself.

     - options - A hash of options passed to the Sensor API telling it how we want to see the data. We set minimumGap here, which is how long the API will wait, in milliseconds, between readings.

     - previousAcceleration - Used to calculate acceleration changes.

     - shakeCallBack - A reference to our shake event callback function.

 

shake.startWatch Method

shake.startWatch = function (onShake) {
  if (onShake) {
    shakeCallBack = onShake;
  }
  watchId = Rho.Sensor.makeSensorByType(Rho.Sensor.SENSOR_TYPE_ACCELEROMETER);
  if (watchId !== null) {
    watchId.setProperties(options);
    console.log('starting detection');


    watchId.start(assessCurrentAcceleration);
  }
  else
  {
    handleError();
  }
};











 

This is the function that starts the detection of shakes. It is an anonymous function that accepts a callback function, onShake, as its parameter, and it creates the sensor from which we would like to gather readings. Here is where we see the first explicit use of the Sensor API:


watchId = Rho.Sensor.makeSensorByType(Rho.Sensor.SENSOR_TYPE_ACCELEROMETER);

 

watchId will now be a Rho.Sensor object, specifically an accelerometer sensor. We then check that the device supports this by making sure the result is not null. If the sensor was created successfully, we can configure it and start detecting events. The setProperties() method, exposed by the Sensor API, is used to configure a group of properties in one call. Here, all we are doing is setting the minimumGap property to 300 ms. We could have equally accomplished this with watchId.minimumGap = options.minimumGap.
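In plain JavaScript terms, the setProperties() pattern amounts to copying a hash of values onto the sensor object in one call. A rough, hypothetical equivalent (not the actual Rho implementation):

```javascript
// Rough sketch of the setProperties pattern: copy each entry of an
// options hash onto the target object in a single call.
function applyProperties(target, options) {
  for (var key in options) {
    if (options.hasOwnProperty(key)) {
      target[key] = options[key];
    }
  }
  return target;
}

// So these two are equivalent in effect:
// sensor.setProperties({ "minimumGap": "300" });
// sensor.minimumGap = "300";
```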

 

We are now ready to enable the accelerometer using the Sensor.start() method. Since we want to continually monitor accelerometer events, we pass in a callback function so we can handle the data appropriately:

 

watchId.start(assessCurrentAcceleration);










Handling Accelerometer Events

// Assess the current acceleration parameters to determine a shake
function assessCurrentAcceleration(acceleration) {
  var accelerationChange = {};
  if (previousAcceleration.x !== null) {
    accelerationChange.x = Math.abs(previousAcceleration.x - acceleration.accelerometer_x);
    accelerationChange.y = Math.abs(previousAcceleration.y - acceleration.accelerometer_y);
    accelerationChange.z = Math.abs(previousAcceleration.z - acceleration.accelerometer_z);
  }
  // console.log('movement detected:' + (accelerationChange.x + accelerationChange.y + accelerationChange.z).toString());
  if (accelerationChange.x + accelerationChange.y + accelerationChange.z > 30) {
    // Shake detected
    console.log('shake detected');
    if (typeof (shakeCallBack) === "function") {
      shakeCallBack();
    }
    shake.stopWatch();
    setTimeout(shake.startWatch, 1000);
    previousAcceleration = {
      x: null,
      y: null,
      z: null
    }
  } else {
    previousAcceleration = {
      x: acceleration.accelerometer_x,
      y: acceleration.accelerometer_y,
      z: acceleration.accelerometer_z
    }
  }
}











 

This is the function we use to calculate the acceleration and determine whether the device is being shaken. Through watchId.start(assessCurrentAcceleration); the information from the sensor is passed to the assessCurrentAcceleration() function. The 'acceleration' parameter is the callback object returned from the Sensor.start method. It contains the properties accelerometer_x, accelerometer_y, and accelerometer_z, on which we perform some math to determine whether there was a shake. We have determined that a combined difference of 30 or more is suitable for a typical shake, but you may want to play around with this value, or better yet expose it as a configurable item in the 'shake' object to set the sensitivity. We then call the function referenced by shakeCallBack, which in turn calls the callback we define later in the tutorial. After calling the callback, we stop shake detection using the Sensor.stop() API call, wait a full second (1000 ms) before starting detection again with setTimeout(shake.startWatch, 1000);, and nullify our x, y, and z variables.
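The shake math itself can be isolated into a small pure function, which makes the threshold easy to test and tune. This helper is hypothetical (not part of the Sensor API), using the threshold of 30 from the code above:

```javascript
// Hypothetical helper: sum the absolute per-axis changes between two
// readings and compare against a sensitivity threshold (30 by default,
// matching the value used in this post).
function isShake(prev, curr, threshold) {
  threshold = threshold || 30;
  var change = Math.abs(prev.x - curr.x) +
               Math.abs(prev.y - curr.y) +
               Math.abs(prev.z - curr.z);
  return change > threshold;
}
```

A sharp jolt (large change on any axis) trips the threshold, while the small jitter of a device sitting on a table does not.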

 

 

Application Start, Stop, and Callback Functions

 

/public/js/application.js

// Define functions that work with shake detection
function start() {
  shake.startWatch(myShakeCallback);
}

function stop() {
  shake.stopWatch();
}











 

For our demo application, I defined a start() function which calls shake.startWatch() passing myShakeCallback as the callback function for the shake event. I also defined a stop() function which simply calls shake.stopWatch(); to stop shake detection.

 

Now that we can start and stop shake detection, we need to do something with the fact that we detected a shake.

 

myShakeCallback = function() {
  if (currentPage == "package.html") {
    currentPage = "sign.html";
  } else {
    currentPage = "package.html";
    // Capture Signature
  }
  $(".page").load(currentPage, function() { Pace.stop(); });
}











 

As you can see, the callback function will detect which page it is on using a global JS variable. Since I only have two pages, it is simple enough to keep track of the current page in this fashion.

 

Incorporating Shake Detection Into Your App

Now that I have defined shake detection I need to actually use it in my app. Let's define an index page to start off with. In my index page I need to include JS and CSS sources in order to take advantage of the Rho JS API and the Bootstrap components.

 

/public/views/index.html

<!-- This page serves as the traditional layout for a JS app -->
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN"
  "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">

<head>
  <!-- Load JS Libraries -->
  <script type="text/javascript" src="/public/jquery/jquery-1.9.1.min.js"></script>
  <script type="text/javascript" src="/public/js/pace.js"></script>
  <script type="text/javascript" charset="utf-8" src="/public/api/rhoapi-modules.js"></script>
  <script type="text/javascript" src="/public/js/application.js"></script>

  <!-- Load CSS Libraries -->
  <link rel="stylesheet" href="/public/css/bootstrap.css"/>
</head>

<body>
  <div class="page">
    Choose a Package
    <!-- Define logic for multiple packages -->
    <!-- I'm using a simple example here but you can format this however you want. -->
    <ul>
      <li><a href="package.html" class="custom-link">Package #1</a></li>
    </ul>
  </div>

  <!-- Make sure the shake detection is not started while selecting a package -->
  <script type="text/javascript">
  $(function() {
    stop();
  });
  </script>
</body>
</html>















 

You may have noticed that in my code for the index page, I have a strange class, "custom-link". This is part of my single-page app design scheme using Bootstrap. I'll cover its use later on in this post.

 

Adding Additional Views

If you start your app now, you should have a simple page with some text and a link that doesn't work yet. Let's make a page for that link. The use case I had in mind when designing this example was a delivery person delivering packages in extremely cold environments. Assuming the person is wearing gloves, it may be difficult or impossible to accurately press buttons on a device's screen, so I propose giving them the option of shaking the device instead. With that in mind, let's make a page with the package details.

 

/public/views/package.html


  <!-- This could be anything about the package such as delivery details.
  You could even have a picture of the recipient here if you needed to verify ID before delivery -->
  <div class="row">
    <div class="col-xs-12">
      <h1>Package #1</h1>
      Tap the button below or shake the device to collect a signature.
      <br/>
      <button href="sign.html" class="custom-link">Sign for Package</button>
      <button class="custom-link" href="index.html">Home</button>
    </div>
  </div>

  <!-- Start shake detection as soon as this page loads -->
  <script type="text/javascript">
  $(function() {
    start();
  });
  </script>





































As you can see, I have not filled in much of the logic or detail here because your implementation may vary greatly. Once again, we have a page with some text and a not-yet-working link to a page called sign.html, which I'll define now.

 

/public/views/sign.html

  <!-- This is a page for collecting a signature of receipt for the package -->
  <div class="row">
    <div class="col-xs-12">
      <h1>Sign for Package</h1>
      Tap the button below or shake the device to accept the signature and go back.
      <br/>
      <button class="custom-link" href="package.html">Accept</button>
    </div>
  </div>

  <!-- Start shake detection as soon as this page loads -->
  <script type="text/javascript">
  $(function() {
    start();
  });
  </script>














 

You may also notice the bit of inline JS I have defined at the bottom of the page. This will automatically start my shake detection once this page is loaded into the DOM.

 

I'll be using jQuery's document-ready shorthand, $(function() {...}), to make this a single-page app. I define this in application.js as such:

 

/public/js/application.js

// Document ready
$(function() {
  currentPage = "package.html";
  // All links handled here
  $('body').on("click", ".custom-link", function(e) {
    e.preventDefault();
    // Use Pace for a loading indicator
    Pace.start();
    var that = e.currentTarget;
    var href = $(that).attr("href");
    $(".page").load(href, function() {
      Pace.stop();
    });

    return false;
  });
});



















 

Now you can see the use of the "custom-link" class: when a link has this class, the contents of the current page are replaced with those of the referenced page. I also use Pace to show a loading indicator in case of long load times.

 

Now you should be able to launch your app and see the following:

 

Landing Page

index.html

   

After clicking "Package #1 link"

package.html

   

Shake on package.html

sign.html

   

Shake on sign.html

package.html

   

 

 

At this point in your app development cycle you would add your logic and styling to make this app your own. Using Bootstrap, it's very easy to make your app look extremely professional in much less time than you might think. Once again, if you have never used Bootstrap, I highly recommend looking into it for your web design needs.

 

I hope this post has given you a bit more insight into our Sensor API and helps you understand what you'll need to do, and use, to get your app where you want it.

Using the Samsung Remote Test Lab, you can easily see what your app will look like on any of the Samsung devices the site offers.

 

Getting started with the Samsung RTL

 

To use this service you will need a Samsung developer account; if you don't have one, don't worry, as signing up is completely free with no fees or charges. Simply go to the Samsung Remote Test Lab site and sign up. Once you are signed up (or signed in), you'll see a screen with the devices you are allowed to access. Also notice that you have a certain number of credits allotted to you: each hour you reserve a device costs 4 credits. You are allotted a certain number per day, and once you go through them all, you cannot reserve any more devices until the next day.
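The credit arithmetic above can be captured in a tiny helper (purely illustrative, not part of any Samsung API):

```javascript
// Hypothetical helper: each reserved hour costs 4 credits, so a day's
// allotment of credits buys floor(credits / 4) hours of device time.
function reservableHours(credits) {
  return Math.floor(credits / 4);
}
```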

 

 

Browser Requirements

 

Once you are here, you can see whether your browser meets the requirements to run the remote devices. If it does, you are free to start using the service. If it does not, there are steps you must take before using the remote devices; luckily, Samsung provides links to where you can get the necessary pieces to get up and running. Here is what each scenario looks like.

 

NOTE: You may have to click the Details drop-down menu to see exactly why your browser is failing this test.

 

Passing Browser

 

Failing Browser

 

The failure here occurred because there is no 64-bit version of Google Chrome for the Mac, and the required Java doesn't run in 32-bit browsers. If you are running the RTL on a Mac, your best bet is to use Safari. On Windows, IE 7+ is sufficient.

 

Choosing a Device

 

Now that you have your Samsung developer account and have verified that your browser can run the remote test devices, it's time to choose which device you are going to test on. You must choose the Android release, the device model, and how long you want to reserve the device. For this example I chose a Galaxy Note 8.0 running Android 4.2.2. After selecting all the attributes for your virtual device, you will be prompted to confirm them and start the reservation.

 

Using Your Remote Device

 

After you verify your remote device's settings and click start, a .jnlp file will be downloaded to your computer. Simply double-click this file to start the remote device.

 

If you get a security warning, accept the terms and run the application. Once you tell the application to run, it will start the remote device and initiate the loading process on your machine. You should see something like this as it loads the device screen; once loaded, it should simply look like the Android device you chose.

 

Device Loading

Device Loaded, Ready to Use

 

Installing Your App Onto the Remote Device.

 

Once your device is up and running you will have a fully functioning Samsung Android device that you can play around with and even install apps on, in order to see how your app would look on that particular device. To install an app on the device, simply right-click the device itself and click Manage -> Files. This opens the window below and allows you to transfer files to and from the device. In this example I am transferring the .apk of our most basic Rhodes app, which I created in RhoStudio; the only change I made is that the app displays the platform on which it is running. In this image, the file is being transferred to the device. Note: You may experience some lag when working with the remote device. This is completely normal, so it's nothing to be concerned about.

 

Once your .apk is on the device, you install it as you would any other app on an Android device: go into the device's file browser and click the .apk file. Once your app is installed, you should find it in the app drawer with all the other apps, and you run it just the same: click the app icon.

 

App in app drawer

App started on remote device

 

Disadvantages

 

Device Hardware

One downside to using the remote test lab is that, while you DO have access to the hardware on the device, it may not necessarily be useful to you. For instance, below is a screenshot of the device's camera app running on the remote device. From here on, these are all shown on a Samsung Galaxy S4, just for a bit of difference.

And here is what the front-facing camera sees:

 

Of course, this is what is expected, since we know these are simply devices hooked up in a server farm somewhere. Keep in mind that this is not the only view a device will see: devices in different regions will have different camera angles. For instance, this one is in the UK:

Which is also not really that useful but better than a black screen.

 

The accelerometer is also pretty much useless since the devices are not going to be moving.

 

Network

Besides WiFi, you are stuck with whatever service the device has access to. For a lot of these devices that means Edge or 2G. Not all of them are within 3G or 4G networks.

 

Review

All in all, this is a great tool for debugging your views on different devices, since applications like RhoSimulator, while accurate most of the time, will not necessarily show you exactly what your app will look like on every device. It's also very convenient not having to clutter up your office with devices just to tell how your app will look or perform on a given Samsung device.

Thank you for reading my rundown on the use of the Samsung Remote Test Lab; I hope it is useful to you in your app development cycle.
