How to code C# on a Raspberry Pi to read a Time of Flight sensor – PART 4

Over a series of videos, I have demonstrated how to set up a ‘headless’ (no display/keyboard/mouse) Raspberry Pi (I’m using a Raspberry Pi Zero 2 W) for .NET 6 development. This lets us use Visual Studio Code to write C#, target the Raspberry Pi *and* run/debug the code remotely from Windows. Whilst Visual Studio Code is now available on the Raspberry Pi directly, I’d rather develop on my Windows desktop and run/debug from there. If you want to do the same, follow along with me.

In the last video we got our GPIO up and running with an LED and a switch. Today, we’re going to interface with a device that measures distance: a Time of Flight (TOF) laser range sensor. We’ll talk to this device over the Raspberry Pi’s UART/serial port.
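To give a flavour of the C# side, here is a minimal sketch that reads from the Pi’s UART using `System.IO.Ports` (available as a NuGet package on .NET 6). The port name, baud rate and line-based output format are my assumptions for illustration — the Instructions.md on GitHub has the exact settings used in the video:

```csharp
// Hedged sketch: read a UART-connected TOF sensor from C# on the Pi.
// Assumptions (check Instructions.md for the real values):
//   - the sensor is wired to /dev/ttyS0 (or /dev/serial0)
//   - 115200 baud, 8 data bits, no parity, 1 stop bit
//   - the module streams its readings as text lines
using System;
using System.IO.Ports;   // NuGet package "System.IO.Ports" on .NET 6

class Program
{
    static void Main()
    {
        using var port = new SerialPort("/dev/ttyS0", 115200, Parity.None, 8, StopBits.One);
        port.ReadTimeout = 2000;   // don't block forever if nothing arrives
        port.Open();

        while (true)
        {
            try
            {
                // One reading per line (assumed output format)
                string line = port.ReadLine();
                Console.WriteLine($"Sensor says: {line}");
            }
            catch (TimeoutException)
            {
                Console.WriteLine("No data - check the wiring and the UART config.");
            }
        }
    }
}
```

If `port.Open()` throws an access-denied error, that is the “no permission” problem covered at 06:50 — your user needs to be in the group that owns the serial device.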

Chapters
========

00:00 Journey so far (Episodes 1-3)
01:44 Introduction to TOF/UART project
02:19 GitHub instructions
02:44 VL53L1X Time of Flight Sensor
03:08 Waveshare Time of Flight Sensor
03:22 Wiring the sensor
04:52 Configure UART on Raspberry Pi
06:50 No permission error
07:37 Update the C# code
08:40 Run the code
09:24 Close and Thanks
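For reference, the UART configuration covered at 04:52 and the “no permission” fix at 06:50 boil down to commands along these lines — a sketch assuming stock Raspberry Pi OS paths; the GitHub instructions below have the exact steps I used:

```shell
# Enable the UART (on newer Raspberry Pi OS releases the file
# lives at /boot/firmware/config.txt instead)
echo 'enable_uart=1' | sudo tee -a /boot/config.txt

# Remove the serial login console so our program gets the port to itself:
# delete 'console=serial0,115200' from /boot/cmdline.txt (shown here with sed)
sudo sed -i 's/console=serial0,115200 //' /boot/cmdline.txt

# Fix the "no permission" error: the serial device belongs to the
# dialout group, so add our user to it
sudo usermod -a -G dialout $USER

# Reboot for the UART and group changes to take effect
sudo reboot
```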

Instructions and supporting files are available on GitHub here:
https://github.com/ATGNI/RaspberryPi/tree/main/Episode%204

Instructions are here:
https://github.com/ATGNI/RaspberryPi/blob/main/Episode%204/Instructions.md

The Time Of Flight (TOF) Laser Range Sensor is available here:
https://thepihut.com/products/time-of-flight-tof-laser-range-sensor

Music: ‘Macon Ga’ from Epidemic Sound
Stock Footage: StoryBlocks

Let me know if you want more Raspberry Pi episodes!

Behind The Scenes (BTS)
=======================

Parts 1 & 2 seem to have had some measure of success, although strangely, the GPIO finale in Episode 3 not so much. This video is an experiment to see if there’s any interest in UART/serial ports, or indeed in this series. This episode has taken the best part of two weeks to put together, what with all the prep work, then the B-roll filming, plus the talking.

I used ShareX software to record anything on Windows, e.g. Visual Studio Code and Linux sessions. For the main camera, I’ve used a Panasonic S1H, mounted on the desk via an iFootage suction mount plus an articulated arm. I use a Sigma 20mm Art lens (f2.0) at 24fps 4K, set to manual focusing. The aperture at f2.0 is probably too wide, as the depth of field is very tight. I’ve used the Panasonic S5 with either the Pixapro Orbit600 turntable base or an iFootage Nano slider to film the Raspberry Pi and components. All components secured with Blu-Tack!

Audio is provided by a Rode lavalier plugged into the S1H. I have a SmallHD monitor mounted on top, facing me, to help with focusing. Lighting is from a single LED light, an Aputure Amaran HR672, reflected off a large white card. Background lighting is provided by an AtomCube RGB LED light at the back of the room, giving that blue glow. No scripts, teleprompter or notes this time.

All the screens are pre-recorded; I then play them back and record video/audio over the top. I forgot I was supposed to be giving the illusion that I’m typing in real time whilst talking and looking into the camera, so I didn’t pull that off so well. Editing as usual in Adobe Premiere Pro. I used the audio/music ‘remix’ feature to stretch the one track from 3 mins to 8 mins. It’s clever.

The GitHub files are working out really well (for me anyway!) in that I’ve made a hundred edits to the Instructions.md text, and GitHub Markdown lets me format the text appropriately, whereas YouTube description text has few formatting capabilities. I messed up the chapters on Episodes 1-3 in that I forgot you must have a 00:00 chapter! All fixed now.

*** Well! So I thought. Chapters work on some videos and not others… but the YouTube documentation says at least 1,000 subscribers are required, so I need another 900+ subscribers…
