Deep learning has been successfully used for computer vision tasks, but its high computational cost limits its adoption on lightweight devices such as camera sensors. For this reason, many low-latency vision systems offload the inference computation to a local server, requiring fast (de)compression of the source images. Texture compression is a compelling alternative to existing compression schemes, such as JPEG or HEVC, due to its low decoding overhead, straightforward parallelization, robustness, and fixed compression ratio. In this paper, we study the impact of lightweight bounding-box-based texture compression algorithms, BC1 and YCoCg-BC3, on the accuracy of two computer vision tasks: object detection and semantic segmentation. While JPEG achieves a lower per-pixel error rate, the YCoCg-BC3 encoding can provide comparable vision accuracy. The BC1 encoding, by contrast, significantly degrades vision performance. However, by retraining the FasterSeg teacher network on a BC1-compressed dataset, we reduced its segmentation mIoU loss from 2.7 to 0.5 percentage points. Thus, both BC1 and YCoCg-BC3 encoders are suitable for use in low-latency vision systems, since they both achieve significantly higher encoding speed than JPEG and their decoding overhead is negligible.
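The YCoCg-BC3 scheme referenced above rests on the standard YCoCg color transform, which decorrelates luma from chroma before block compression (in a typical YCoCg-BC3 layout, Y goes into the BC3 alpha channel and Co/Cg into the color endpoints). The paper does not give its implementation; the sketch below only illustrates the forward and inverse transform itself, using NumPy:

```python
import numpy as np

def rgb_to_ycocg(rgb):
    """Forward YCoCg transform (standard floating-point form).

    Y carries luma; Co and Cg carry orange and green chroma.
    """
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.25 * r + 0.5 * g + 0.25 * b
    co = 0.5 * r - 0.5 * b
    cg = -0.25 * r + 0.5 * g - 0.25 * b
    return np.stack([y, co, cg], axis=-1)

def ycocg_to_rgb(ycocg):
    """Inverse transform: recovers RGB exactly in exact arithmetic."""
    y, co, cg = ycocg[..., 0], ycocg[..., 1], ycocg[..., 2]
    r = y + co - cg
    g = y + cg
    b = y - co - cg
    return np.stack([r, g, b], axis=-1)
```

Because the transform is a fixed linear map, the round trip is lossless up to floating-point rounding; in a real YCoCg-BC3 encoder the loss comes from the subsequent BC3 quantization, not from the color transform.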