ApperceptMD

Raising capital to build an app that will triage burn injuries.


We are a medical device company developing technology to triage burn injuries.

Each year in the United States, significant burn injuries affect 500,000 people; of those, 40,000 burn victims are hospitalized (American Burn Association, 2021). Among fire/rescue personnel, 13% of fatalities are caused by severe burns (National Fire Protection Association, 2020), and in the military, 16% of all those wounded in combat suffer from burns (Statista, 2021). Surviving a severe burn injury requires accurate diagnosis and prompt specialized treatment, both of which depend on the precise determination of burn severity derived from the extent and depth of the burn. Inaccurate determination of burn depth and misdiagnosis of burn severity can lead to delayed treatment, an insufficient level of treatment, and poor health outcomes, including higher mortality rates, disability due to contracture healing, and scarring. Even small- to medium-sized, non-life-threatening burns require accurate identification of burn depth to determine immediate or delayed transport by emergency medical services. Although burn depth diagnosis drives clinical decision making (patients being admitted, transferred, or dying), the current system is fraught with time delays (transport to the hospital, triage in the emergency room, and waiting for an expert), even though survival usually depends on speedy access to specialized medical services. In addition, misdiagnosis leads to time delays, inefficient use of medical transport services, increased infection rates, unnecessary surgeries, and an overall lower standard of care. As a result, accurately assessing burn depth is crucial to determining healing potential, which is based on the histological elements in the affected layers of skin.
            Burn injuries are classified into three categories: 1) First-degree burns (superficial burns) affect only the outer layer of skin, the epidermis, but not the basal membrane, which contains the regenerative skin cells (keratinocytes, also known as epidermal cells). These burns cause pain, redness, and swelling, but specialized treatment is usually unnecessary because they heal on their own; 2) Second-degree burns (partial-thickness burns) affect both the outer layer and the underlying layer of skin, the dermis, where the basal membrane is damaged. These burns usually cause severe pain and blistering, appear wet, and require specialized treatment to heal; and 3) Third-degree burns (full-thickness burns) affect all layers of the skin, penetrating the entire dermis into the fatty subcutaneous tissue beneath, and produce a leathery-appearing layer of tissue with little sensation or pain. These full-thickness burns typically destroy all the regenerative cells capable of healing the skin, thus requiring surgical interventions such as skin grafts to heal properly.
When the skin is burned (see figure from firstaidforfree.com), the capillaries of the injured tissue are damaged by protein denaturation. The burn leaves the capillaries in a zone of coagulation, stasis, or hyperemia (Jackson, 1947; Hettiaratchy & Dziewulski, 2004). The zone of coagulation, usually at the center of the burn, is the primary injury and shows irreversible tissue necrosis with no pulse-related color change from blood flow. The zone of stasis is the secondary injury, which can regain blood flow in the dermal circulation. The zone of hyperemia is the periphery of the burn and is characterized by enhanced pulse-related color change, signifying blood flow and inflammation. The intact capillaries can therefore indicate the approximate depth of a burn based on the level of blood flow in the injured skin.
            Because the detection of blood flow is so crucial for the assessment of burn depth, all of the available instrumentation for the non-invasive determination of burn depth consists of highly specialized optical systems that detect blood flow, such as Laser Doppler Imaging, Laser Speckle Imaging, Multispectral Imaging, Near-Infrared Spectroscopy, Indocyanine Green Videoangiography, Capillary Microscopy, Thermal Imaging, Active Dynamic Thermography, Short Wave Infrared Light Technology, Optical Coherence Tomography, Photoacoustic Imaging and Microscopy, and Spatial Frequency Domain Imaging (Kaiser et al., 2011; Wearn et al., 2018; Calin et al., 2015; Lee et al., 2020; Simmons et al., 2018).
            In addition to their hardware requirements, all existing systems for the non-invasive determination of burn depth can only be used in controlled, stationary environments: essentially an operating room setting, with cleansed wounds, intravenous access, anesthesia for motion control, and controlled lighting, to name a few requirements. These conditions cannot be achieved in a field setting or a crowded emergency room. Other limiting factors of current burn depth technology are high cost, heavy special hardware that is not mobile enough for emergency rooms, and the expertise required to interpret the images. As a result, current burn depth technology is not universally accessible because of its cost and complexity, and it demands a high level of expertise to use and to interpret its output. The fact that burns occur in places with no burn experts and no room for bulky technology reveals the pressing need for an easy-to-use, internet-enabled application for determining burn depth classification.
            Because current burn depth technology is not universally accessible, some researchers have recently turned to the latest advances in image-based deep learning, which show promising results for burn injury classification under laboratory conditions. These automatic burn classification systems use learning-based approaches with high demands on annotated datasets (Chauhan & Goyal, 2020; Yadav et al., 2019; Suvarna et al., 2013; Jiao et al., 2019; Tran et al., 2016; Son Tran et al., 2016). State-of-the-art publications in this field show promising results but require huge databases and high-quality annotations for reliable classification; annotations from domain experts are essential to the success of such approaches. Unfortunately, large burn datasets are not publicly available and lack sufficient expert annotations, making previously developed deep learning systems unreliable in clinical practice for assessing burn depth from standard still images. The available datasets lack the variety, the number of burn injury samples, and the annotation quality needed for consistent classification of burn injuries in real-world situations, and all three are major cost factors in current burn depth technology.
            In summary, only two main approaches are currently in use for burn depth classification: specialized optical imaging hardware and image-based deep learning. However, only a few non-invasive methods can penetrate deep enough into the skin tissue to detect blood flow. Jeffrey Thatcher summarized in his 2016 Comprehensive Invited Review: “Clearly, there are better alternatives to the most commonly used burn assessment techniques, clinical judgment, and photography, but none are currently as user friendly” (Thatcher et al., 2016). What the field needs is burn depth technology that is not only user friendly but also universally accessible, enabling minimally trained users to triage effectively in emergency settings.
            Recently, there have been remarkable advances in machine learning for video analysis that allow subtle properties of human skin and blood circulation to be amplified in videos recorded with standard devices, such as ubiquitous high-end smartphones. The quality of smartphone imaging, along with the latest software processing speeds, can support rule-based systems as well as shallow and deep learning models. Modern methods of video pre-processing allow enhancement of subtle changes in human skin related to blood circulation (Rubinstein et al., 2017; Wadhwa et al., 2014, 2016; Wu et al., 2012; Balakrishnan et al., 2013; Ganfure, 2019). These methods show promising results in pulse rate detection but do not evaluate injured skin or other human tissues. Ahmed et al. compared different infrared systems for vein detection and showed that infrared video cameras can enhance and visualize blood veins non-invasively (Ahmed et al., 2017; Ahmed et al., 2018). While the proposed system allows visualization of veins in real time, it requires controlled environments, stationary video setups, and infrared camera sensors, and burn depth classification has not been addressed within it.
              To our knowledge, no system or device exists to evaluate burn depth by using enhanced standard video and standard hardware. ApperceptMD proposes that using domain-specific, video enhancement methods for extracting spatiotemporal features for burn depth estimation would lead to a reliable assistance system for field usage by paramedics, military medics, etc. We will extend the concept of video magnification (Rubinstein et al., 2017; Wu et al., 2012) by combining temporal, spatial, and color information from videos into domain-specific features to detect the depth of burn injuries. Specifically, we propose that more robust classifiers can be developed from subtle variations in skin color, provided that environmental and recording artifacts in videos, such as camera and object motion, are limited. Including temporal, spatial, and color information should lead to more robust classification systems while reducing dataset requirements and constraints.
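As an illustrative sketch only, the temporal side of this idea can be demonstrated in a few lines of Python. The snippet below applies a crude band-pass filter (a difference of two moving averages, standing in for the ideal temporal filters used in the published video magnification literature) to a single pixel's intensity trace and amplifies the pulse-band component, the same principle that makes subtle pulse-related color changes visible. The function names, window sizes, and the `alpha` gain here are our own assumptions for illustration, not part of any published system or of VEBA itself.

```python
import math

def bandpass(signal, fps, low_hz, high_hz):
    """Crude temporal band-pass: difference of two moving averages
    whose window lengths correspond to the two cutoff frequencies."""
    def moving_avg(sig, win):
        win = max(1, win)
        out = []
        for i in range(len(sig)):
            lo = max(0, i - win + 1)
            out.append(sum(sig[lo:i + 1]) / (i + 1 - lo))
        return out
    slow = moving_avg(signal, int(fps / low_hz))   # keeps only content below low_hz
    fast = moving_avg(signal, int(fps / high_hz))  # keeps content below high_hz
    return [f - s for f, s in zip(fast, slow)]     # roughly low_hz..high_hz band

def magnify(signal, fps, low_hz=0.8, high_hz=3.0, alpha=20.0):
    """Amplify the pulse band (0.8-3 Hz, i.e. ~48-180 bpm) and add it back,
    in the spirit of Eulerian video magnification applied to one pixel."""
    band = bandpass(signal, fps, low_hz, high_hz)
    return [s + alpha * b for s, b in zip(signal, band)]

# Synthetic pixel-intensity trace: flat baseline plus a tiny 1.2 Hz "pulse".
fps = 30
t = [i / fps for i in range(150)]
trace = [100.0 + 0.05 * math.sin(2 * math.pi * 1.2 * x) for x in t]

out = magnify(trace, fps)
# The amplified trace swings far more than the barely visible original.
print(max(trace) - min(trace), max(out) - min(out))
```

In a full system, such a band-passed signal would be computed per region of each frame rather than per single pixel, and its amplitude (strong in the hyperemic zone, absent in the coagulated zone) would become one of the spatiotemporal features fed to the classifier.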
            Once the aims of this project are achieved, our product, Video Enhanced Burn-depth Analysis (VEBA), will improve technical capability and clinical practice by reducing misdiagnosed burn injuries in triage settings. One contribution will be integrating the pre-processing of video clips to enhance subtle differences in real time in order to classify burn depth. The ultimate goal is the ability to capture video with standard technology (a smartphone) and draw machine learning-based conclusions about burn depth. The system will be deployable to high-end rugged smartphones with powerful imaging sensors, and its main functionality will be accessible without an internet connection. The video preprocessing data should then be easily transferable to assisting medical providers via an active internet connection or a physical download link.
            VEBA has the commercial potential to lead to a marketable product because our smart-device-deployable, no-patient-touch software application will be the only system on the market using domain-specific video enhancement methods to extract spatiotemporal features for robust burn depth estimation. In addition, our system should be easily marketable because it relies solely on standard video clips and therefore avoids expensive hardware. We expect it to become commercially available by 2026. All research and development to commercialize our product will be carried out in the United States, where we will market our system to the Department of Defense, other government agencies, health care organizations, emergency rooms, emergency medical services, etc. As a result, this new technology will enable minimally trained users to triage effectively in emergency rooms, mass casualty situations, and battlefield settings. It will not only improve triage, diagnostic accuracy, and patient outcomes but will also reduce the time needed to make treatment decisions and eliminate unnecessary transfers and surgeries. Because we take advantage of current smartphone camera/imaging technology and faster onboard processor speeds, no special hardware is required for recording.

Innovation
            ApperceptMD’s proposed solution, VEBA, is innovative because it will shift clinical practice paradigms away from non-user-friendly technology for assessing burn depth by offering “better alternatives to the most commonly used burn assessment techniques” (Thatcher et al., 2016). Our technology will shift clinical practice paradigms in the following ways: 1) Efficiency – A mobile app that interprets burn depth is both innovative and user friendly because it is far more ubiquitous and cost-effective than the expensive optical hardware currently employed. Because burn depth technology is not universally accessible due to excessive cost and complexity, our cost-effective innovation will better control the over- and under-triage problem of unnecessary costs and wasted resuscitation resources, and we avoid the complicated sensors that would make our product too expensive for mass rollout. 2) Expertise – Using video enhancement and machine learning to interpret video imagery is both innovative and user friendly because it does not rely on the high level of human expertise needed to interpret output from current burn depth technology. Our application-specific video preprocessing method and machine learning-based classification algorithm will speed up the triage process by supporting the decision of whether a patient must be immediately transferred to a burn center. As a result, our solution will enable minimally trained users to triage effectively in emergency rooms, mass casualty situations, and battlefield settings. 3) Usability – A non-contact, noninvasive mobile app on a smartphone that reads burn depth is both innovative and user friendly because it can be mobilized in field situations with no access to current burn depth technology, which is not portable.
Because current burn depth technology is not mobile enough for emergency rooms, our deployable platform will make the technology easily transferable to medical providers through local processing. In addition, with current advances in smartphone imaging technology, our solution will be intuitive enough that anyone with limited training could use it effectively, making it simple to explain to an untrained user.
            VEBA’s solution is broadly applicable in multiple fields for assessing burn depth. For example: 1) Telehealth Ready – Our software-based approach can assist medical personnel in rural emergency room settings, where existing telemedicine systems for burn care rely on simple photography. These images are transmitted to a burn expert who then helps determine the disposition of the patient. While advances in telemedicine beyond phone calls alone have made it easier for experts to triage patients, still images are often insufficient to determine burn depth. 2) First Responders – Our approach will improve triage decisions made by paramedics, emergency medical technicians, firefighters, etc. in mass casualty situations. In many instances, mass casualty burns remain undiagnosed until patients are relocated to emergency facilities often miles away, which introduces critical time delays. Because our solution is portable to any smart device with a camera, it will assist first responders with burn depth diagnoses, leading to better clinical decisions about whether patients are admitted to a hospital or transferred to a burn center. 3) Military Triage – Our approach will help field medics and triage personnel diagnose burn depth casualties more quickly in war situations. Current burn depth technology is not suited to battlefield conditions because burn triage must be carried out behind battle lines and is entirely dependent upon human expertise. In combat and mass casualty situations, survival of a severe burn injury depends on accurate diagnosis and speedy access to specialized medical services, and accurate triage may determine bed availability for severely injured individuals. However, as is often the case in war, too few doctors with burn treatment expertise are available in the field.
In addition, bulky hardware is not practical in extreme war conditions, where internet connections are unavailable. Our smart-device-deployable system is a non-contact software application that adds zero additional weight for military personnel and will ultimately reduce the exposure of soldiers and medics to other toxic factors (e.g., chemical burns, nuclear exposure).
            ApperceptMD proposes a new application to assess burn depth that enables minimally trained users to triage effectively in emergency rooms, mass casualty situations, and battlefield settings. To our knowledge, no burn depth classification system currently uses spatiotemporal features from enhanced standard video for tissue damage assessment. Our application is also novel because no one has yet attempted to create an objective, portable, on-site, real-time burn depth diagnostic tool that classifies human burns from standard RGB video (still images have been used, but not videos). The innovative contribution of this project will be integrating spatial image features (how colors are arranged in a still image) with temporal image features (how color and position change over time in video) to enhance subtle differences in real time in order to classify burn depth. The ultimate goal is the ability to capture video with standard technology (a smartphone) and draw machine learning-based conclusions about burn depth. The proposed video analysis can emphasize blood circulation in standard videos in real time and might be used in a variety of existing systems as well as in novel applications. Our approach also reduces project failure risk, since existing classification systems would benefit from temporal features as well: the increase in classification reliability would allow research findings from previous work to be integrated into robust real-world applications.


ApperceptMD is no longer seeking funding.