
Employee Spotlight Series: Jaclyn Kan, Data Scientist

Our engineering team plays a pivotal role at Energy Toolbase, ensuring the precision of our software controls. Dedicated to the highest performance standards, they monitor and diagnose issues that arise in the field and resolve them promptly. This rapid response minimizes downtime and lets our users efficiently deploy their solar and energy storage solutions, keeping our software robust, accurate, and user-friendly so that customers achieve optimal performance from their energy systems.

Meet Jaclyn Kan, one of Energy Toolbase’s Data Scientists, based in Calgary, Canada. With three years of experience at Energy Toolbase, Jaclyn provides insights into the role of a Data Scientist at ETB, detailing the end-to-end process of designing machine learning models, her favorite project, and what the future holds for Energy Toolbase.

Q: Tell us about your job responsibilities as a Data Scientist at Energy Toolbase.
A: As a data scientist, I have a range of responsibilities that allow me to work with many teams across ETB. One of our main duties is, of course, to create and maintain the models that generate site load and PV forecasts. Any performance issues seen in the field are directed back to us so that we can triage the problem and push out fixes as soon as possible. We are also responsible for ensuring that the microservice that serves these forecasts in real time is reliable and robust. The data science team plays a big role in ETB Developer as well, through our optimization engine. The engine takes in the forecasts, tariff information, hardware specifications, and other details for a site, then calculates the optimal behavior of the battery to save our customers as much money as possible. If our Product team decides to add support for new tariffs or incentive programs, we will be on the scene to help make this a reality in both Developer and the Acumen EMS™.
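For readers curious what an engine like this consumes and produces, here is a minimal, illustrative sketch in Python. Every name, the data structure, and the greedy dispatch logic are assumptions made for illustration only; they are not ETB's actual optimization engine, which optimizes cost across tariffs and incentive programs rather than using the simplistic heuristic shown here.

```python
# Illustrative sketch only: names, fields, and logic are assumptions,
# not ETB's actual optimization engine API.
from dataclasses import dataclass
from typing import List

@dataclass
class SiteInputs:
    load_forecast_kw: List[float]    # forecasted site load, one value per interval
    pv_forecast_kw: List[float]      # forecasted PV production, one value per interval
    energy_rates: List[float]        # tariff energy price ($/kWh) per interval
    battery_power_kw: float          # hardware spec: max charge/discharge power
    battery_capacity_kwh: float      # hardware spec: usable energy capacity

def optimize_dispatch(site: SiteInputs) -> List[float]:
    """Return a battery dispatch schedule (kW per interval, positive = discharge).

    A real engine would solve a cost-minimization problem over the horizon;
    this placeholder simply discharges into the most expensive intervals.
    """
    net_load = [l - p for l, p in zip(site.load_forecast_kw, site.pv_forecast_kw)]
    # Rank intervals by energy price and discharge into the priciest ones
    # until the battery's energy budget is spent.
    order = sorted(range(len(net_load)), key=lambda i: site.energy_rates[i], reverse=True)
    dispatch = [0.0] * len(net_load)
    remaining_kwh = site.battery_capacity_kwh
    for i in order:
        if remaining_kwh <= 0:
            break
        power = min(site.battery_power_kw, max(net_load[i], 0.0))
        dispatch[i] = power
        remaining_kwh -= power  # assumes 1-hour intervals for simplicity
    return dispatch
```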
Q: Tell us about how you got started as a Data Scientist. What drew you to that career?
A: I graduated from university with a software engineering degree, but I always especially enjoyed classes that involved machine learning or data analysis. In my last year at the University of Calgary, I had the opportunity to collaborate with ETB for my capstone project. My team and I decided to create a mobile app version of the ETB Monitor product and won first place in our department for our work. Through this, I got an introduction to what ETB offers its customers and was fascinated by its data science team. Three years later, I still cherish the opportunity to tackle challenging problems with such intelligent colleagues and deeply appreciate the culture of our company.
Q: Can you describe the end-to-end process of developing a machine learning model, from data collection to deployment? 
A: The first step of creating an ML model is data collection. Our models rely on time-series data, so we need to be careful not to disrupt the chronological order or introduce any gaps, as this could significantly impact the accuracy and functionality of the model. Next, we clean and validate the data and split it into training and testing sets. We then build and train a number of models (both ML and statistical) on the training data. Afterward, each model generates predictions for the duration of the test set. We feed these predictions into our optimization engine, which simulates the savings that would have been achieved in the field had this model been deployed to our Acumen EMS. We select the model with the best savings and deploy it onto the edge site. This is a continuous process handled by our model pipeline, ensuring that all our sites have models that are up to date with the latest data and that each site has a model specifically curated for it.
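To make that selection step concrete, here is a minimal sketch of what a candidate-comparison loop could look like. The candidate models, the chronological split, and the simulate_savings callback are hypothetical stand-ins for illustration, not ETB's actual pipeline code.

```python
# Minimal sketch of the candidate-selection step described above.
# The candidate models and `simulate_savings` are hypothetical stand-ins.
from typing import Callable, Dict, List

def chronological_split(series: List[float], test_fraction: float = 0.2):
    """Split time-series data without shuffling, preserving chronological order."""
    cut = int(len(series) * (1 - test_fraction))
    return series[:cut], series[cut:]

def naive_persistence(train: List[float], horizon: int) -> List[float]:
    # Repeat the last observed value; a common statistical baseline.
    return [train[-1]] * horizon

def moving_average(train: List[float], horizon: int, window: int = 24) -> List[float]:
    # Forecast the average of the most recent `window` observations.
    avg = sum(train[-window:]) / min(window, len(train))
    return [avg] * horizon

def select_best_model(
    series: List[float],
    candidates: Dict[str, Callable[[List[float], int], List[float]]],
    simulate_savings: Callable[[List[float]], float],
) -> str:
    """Train each candidate, forecast over the test window, and keep the one
    whose forecasts yield the highest simulated savings."""
    train, test = chronological_split(series)
    scores = {}
    for name, model in candidates.items():
        forecast = model(train, len(test))
        scores[name] = simulate_savings(forecast)  # optimization-engine simulation
    return max(scores, key=scores.get)
```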
Q: What are some common challenges you face when deploying machine learning models in a production environment, and how do you overcome them? 
A: A common challenge we face is poor data. If a site goes offline or there is an issue with the meters, the data quality suffers, which in turn results in a model that forecasts poorly. To deal with this, we always test for regressions in performance before deploying a new model. However, if there are major issues with the data, it may take manual intervention to determine at what point the data became unreliable. We are always working on ways to improve this process so that our models are continuously improving.
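As an illustration of what a pre-deployment regression gate might look like, the sketch below compares a candidate model's forecast error against the currently deployed model's error on recent data. The error metric, tolerance, and function names are assumptions, not ETB's actual checks.

```python
# Hypothetical regression check run before a new model is deployed:
# promote the candidate only if it does not perform worse than the
# currently deployed model on recent held-out data.
from typing import List

def mean_absolute_error(actual: List[float], predicted: List[float]) -> float:
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def passes_regression_check(
    actual: List[float],
    candidate_forecast: List[float],
    deployed_forecast: List[float],
    tolerance: float = 0.05,
) -> bool:
    """Allow deployment only if the candidate's error is within `tolerance`
    (5% here) of the currently deployed model's error."""
    candidate_err = mean_absolute_error(actual, candidate_forecast)
    deployed_err = mean_absolute_error(actual, deployed_forecast)
    return candidate_err <= deployed_err * (1 + tolerance)
```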
Q: How does Energy Toolbase’s Acumen EMS rely on machine learning? 
A: Our models, machine learning and otherwise, provide site load and PV forecasts. These forecasts inform the Acumen EMS of what is to come so that it can prepare to dispatch the battery where needed to increase savings for the customer. The microservice that houses these models and serves their forecasts is part of the Acumen EMS and communicates with many other services that make up the entire control system.
Q: What are some exciting things you’re working on and working towards at Energy Toolbase? 
A: We are working on refining our models to better adapt to the specific characteristics of each site. This tailored approach ensures that our predictions and analyses are as accurate and relevant as possible for our customers. Alongside these customization efforts, we are also enhancing our model pipeline process. By implementing robust tracking mechanisms, we can monitor each step of the model’s development and deployment. This not only allows for a smoother workflow but also ensures that any issues can be quickly identified and precisely recreated, thereby speeding up resolutions and maintaining our commitment to quality and reliability.  
Q: What has been your favorite project to work on in your time at Energy Toolbase?
A: I am always proud to talk about our work on the optimization engine. This engine helps bridge the gap between our Developer product and the Acumen EMS, ensuring that what the customer simulates in Developer is what they’ll see on the edge. We built this engine to adapt to any tariff, and it can be updated to optimize for many incentive programs. A lot of the data science team’s work happens behind the scenes, but with this project I gained a lot of insight into what our customers prioritize when they run their simulations in Developer. It was an eye-opening experience, and I loved being able to work with many people across the ETB team on this initiative!
