A Swift framework with classes for reading and writing bits and bytes.
BitByteData can be integrated into your project using Swift Package Manager, CocoaPods or Carthage.
Swift Package Manager
To install using SPM, add BitByteData to your package dependencies and specify it as a dependency for your target, e.g.:
```swift
import PackageDescription

let package = Package(
    name: "PackageName",
    dependencies: [
        .package(url: "https://github.com/tsolomko/BitByteData.git", from: "1.4.0")
    ],
    targets: [
        .target(
            name: "TargetName",
            dependencies: ["BitByteData"]
        )
    ]
)
```
You can find more details in Swift Package Manager's documentation.
CocoaPods
Add the `pod 'BitByteData', '~> 1.4'` and `use_frameworks!` lines to your Podfile.
To complete installation, run `pod install`.
Carthage
Add `github "tsolomko/BitByteData" ~> 1.4` to your Cartfile and run `carthage update`.
Finally, drag and drop the built `BitByteData.framework` into the Embedded Binaries section on your targets' General tab in Xcode.
Usage
Use the `ByteReader` class to read bytes.
For reading bits there are two classes: `LsbBitReader` and `MsbBitReader`, which implement the `BitReader` protocol for the two bit-numbering schemes (LSB 0 and MSB 0, respectively).
Both `LsbBitReader` and `MsbBitReader` inherit from `ByteReader`, so you can also use them to read bytes (but the reader must first be aligned to a byte boundary; see the documentation for more details).
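The classes above can be sketched in a short example. This is a minimal illustration, not an exhaustive tour of the API: the method names (`byte()`, `bytes(count:)`, `bit()`, `bits(count:)`, `align()`) are assumed from typical BitByteData usage, so consult the documentation for the exact signatures.

```swift
import Foundation
import BitByteData

let data = Data([0xAB, 0xCD, 0xEF])

// Reading bytes with ByteReader.
let byteReader = ByteReader(data: data)
let first = byteReader.byte()          // reads the first byte (0xAB)
let rest = byteReader.bytes(count: 2)  // reads the remaining two bytes

// Reading bits with LSB 0 bit numbering.
let bitReader = LsbBitReader(data: data)
let flag = bitReader.bit()             // lowest bit of the first byte
let small = bitReader.bits(count: 3)   // next three bits

// Align to a byte boundary before using the inherited byte-reading methods.
bitReader.align()
let nextByte = bitReader.byte()
```

`MsbBitReader` is used the same way; only the order in which bits are numbered within each byte differs.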
Note: All readers and writers are intentionally classes, not structs. This makes it easier to pass them as arguments to functions and eliminates unnecessary copying.
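The consequence of reference semantics can be shown in a small sketch (the `readVersion` helper is hypothetical, made up for illustration):

```swift
import Foundation
import BitByteData

// Because readers are classes (reference types), passing one to a function
// does not copy it: the callee advances the caller's reading position.
func readVersion(_ reader: ByteReader) -> UInt8 {
    return reader.byte()
}

let reader = ByteReader(data: Data([0x01, 0x02]))
let version = readVersion(reader) // consumes the first byte
let payload = reader.byte()       // continues from the second byte; no copy was made
```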
Documentation
Every function and type in BitByteData's public API is documented. This documentation can be found on its own website.
Contributing
Whether you have found a bug, have a suggestion, an idea, or something else, please create an issue on GitHub.
If you would like to contribute code, please open a pull request on GitHub.
Note: If you are considering working on BitByteData, please be aware that the Xcode project (BitByteData.xcodeproj) was created manually, so you should not use the `swift package generate-xcodeproj` command.
Performance and benchmarks
One of the most important goals of BitByteData's development is high performance. To help achieve this goal, there are benchmarks for every function in the project, as well as a handy command-line tool, `benchmarks.py`, which helps run, show, and compare benchmarks and their results.
If you are considering contributing to the project please make sure that:
- Every new function has a corresponding new benchmark.
- Changes to existing functions don't introduce performance regressions, or, at the very least, such regressions are small and the performance tradeoff is necessary and justifiable.
Finally, please note that any meaningful comparison can be made only between benchmarks run on the same hardware and software system.