* [S3ClientMultiPartTransferManager] use request object
* [ActionSubmitter] unwrap RemoteKey in log messages
* [ActionSubmitter] rename variable
* [Logging] include log level in info messages
* [LocalFileStream] log when entering directory at level 2
* [UploadProgress{Listener,Logging}] add initial implementations
* [S3Client] def upload now requires an UploadProgressListener as a parameter
* [UploadProgressListener] rename method
* [S3ClientPutObjectUploader] Log upload progress for files <5MB
Switched to using the AWS SDK V1 for PutObject as the V2 doesn't
support progress callbacks.
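The callback pattern that motivated the switch can be sketched in isolation. This is a simplified illustration only (the names `UploadProgressListener`, `bytesTransferred` and `ChunkedUploader` are assumptions, not the AWS SDK API): an uploader reads the source in chunks and notifies a listener after each chunk, which is the per-chunk hook the V2 SDK's PutObject did not expose.

```scala
import java.io.ByteArrayInputStream

trait UploadProgressListener {
  // Called with the running total of bytes sent so far.
  def bytesTransferred(totalSoFar: Long): Unit
}

object ChunkedUploader {
  // Read the source in fixed-size chunks, notifying the listener after each
  // chunk. A real uploader would send each chunk to the remote end.
  def upload(data: Array[Byte], chunkSize: Int, listener: UploadProgressListener): Long = {
    val in     = new ByteArrayInputStream(data)
    val buffer = new Array[Byte](chunkSize)
    var total  = 0L
    var read   = in.read(buffer)
    while (read != -1) {
      // ... send buffer(0 until read) here ...
      total += read
      listener.bytesTransferred(total)
      read = in.read(buffer)
    }
    total
  }
}
```

With a 10-byte source and 4-byte chunks the listener sees totals 4, 8, 10, which is enough to drive a progress display.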
* Fix up tests
* Adjust logging levels
* [Filter] added
* [Config] Add filters field
* [ParseArgs] Add '-f'/'--filter' parameters
* [LocalFileStream] apply filters
* [SyncLogging] show filter(s)
* [LocalFileStream] Don't apply filter to directories
The filter may match on a file within a directory, but if the filter
fails on the directory alone, then we weren't recursing into the
directory at all.
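The rule can be sketched as follows (simplified; the real LocalFileStream/Exclude code may differ, and `FileSelection` is an illustrative name): exclusion patterns are tested against file paths only, and directories are always entered.

```scala
import java.util.regex.Pattern

final case class Exclude(pattern: String) {
  private val p = Pattern.compile(pattern)
  def isExcludedBy(path: String): Boolean = p.matcher(path).find()
}

object FileSelection {
  // Directories are never filtered out: a pattern that fails to match the
  // directory itself may still match a file somewhere beneath it.
  def recurseInto(isDirectory: Boolean): Boolean = isDirectory

  // Only files are tested against the exclusion patterns.
  def includeFile(path: String, excludes: List[Exclude]): Boolean =
    !excludes.exists(_.isExcludedBy(path))
}
```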
* [Filter => Exclude] rename class
* [Config] rename filters field as excludes
* [readme,ParseArgs] change command-line arg f to x and filters to excludes
* [SyncSuite] rename val
* [ExcludeSuite] rename vars
* [SyncLogging] Update message
* [ThorpS3Client] Extract QuoteStripper and S3ClientObjectLister
* [ThorpS3Client] Extract S3ClientUploader
* [ThorpS3Client] Extract S3ClientCopier
* [ThorpS3Client] Extract S3ClientDeleter
* [ThorpS3Client] Can select upload strategy based on file size
Currently switches to an alternate that is a clone of the original
method.
* [MD5HashGenerator] Add md5FilePart
Reimplement md5File using md5FilePart
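A minimal sketch of hashing part of a file (an assumed shape for md5FilePart; the real MD5HashGenerator may differ): seek to the offset, read `size` bytes, and digest them. `md5File` then becomes `md5FilePart` over the whole file.

```scala
import java.io.RandomAccessFile
import java.security.MessageDigest

object Md5 {
  // Hash only the byte range [offset, offset + size) of the file.
  def md5FilePart(file: java.io.File, offset: Long, size: Int): String = {
    val raf = new RandomAccessFile(file, "r")
    try {
      raf.seek(offset)
      val buffer = new Array[Byte](size)
      raf.readFully(buffer)
      MessageDigest.getInstance("MD5")
        .digest(buffer)
        .map("%02x".format(_))   // hex-encode each byte
        .mkString
    } finally raf.close()
  }
}
```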
* [MyS3CatsIOClient] extracted
* [S3ClientMultiPartUploader] add tests for accept def
* [S3ClientMultiPartUploader] initiate multi-part upload
* [Md5HashGenerator] add tests reading part of a file = failing test
* [Md5HashGenerator] fix when reading part of a file
* [S3ClientMultiPartUploader] create UploadPartRequests
* [S3ClientMultiPartUploader] uploadPart delegates to an S3Client
* [S3ClientMultiPartUploader] uploadParts uploads each part
* [S3ClientMultiPartUploader] complete upload should completeUpload
* [S3ClientMultiPartUploader] upload file tests when all okay
* [S3ClientMultiPartUploader] Use Recording client in component tests
* [S3ClientMultiPartUploader] remove unused variable
* [S3ClientMultiPartUploader] failing test for init upload error
* [S3ClientMultiPartUploader] Handle errors during multi-part upload
* [S3ClientMultiPartUploader] Retry uploads
* [S3Action] ErroredS3Action now holds the error
* [S3ClientMultiPartUploader] Add logging
* [S3ClientMultiPartUploader] Display warning messages
* [S3ClientMultiPartUploader] test creation of CreateMultipartUploadRequest
* [S3ClientMultiPartUploader] specify bucket in UploadPartRequest
* [S3ClientMultiPartUploader] verify complete request has upload id
* [S3ClientMultiPartUploader] verify abort request contains upload id
* [S3ClientMultiPartUploader] add logging around retry errors
* [S3ClientMultiPartUploader] verify upload part request had remote key
* [S3ClientMultiPartUploaderLogging] refactoring/rewriting strings
* [S3ClientMultiPartUploader] add bucket to abort request
* [S3ClientMultiPartUploader] part numbers must start at 1
* [S3ClientMultiPartUploader] fix capitalisation in comment
* [Config] define maxRetries
* [S3ClientMultiPartUploader] abort request should have the remote key
* [S3ClientMultiPartUploader] display remote key properly
* [S3ClientMultiPartUploader] rename method for plural parts
* [S3ClientMultiPartUploader] log hash and part number
* [MD5HashGenerator] support creating hash from a byte array
* [sbt] add aws-java-sdk-s3 (v1) for multi-part uploads
The reactive-aws-s3-* library is based on the V2 of the Java library,
which doesn't support multi-part uploads.
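The dependency change amounts to one line of build configuration (the version string below is an assumption for illustration, not taken from the project):

```scala
// build.sbt: pull in the V1 SDK alongside the reactive V2-based client
libraryDependencies += "com.amazonaws" % "aws-java-sdk-s3" % "1.11.569"
```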
* [S3ClientMultiPartUploader] use Amazon S3 Client (from v1 sdk)
* [S3ClientMultiPartUploader] include file and offset in upload part request
* [S3ClientMultiPartUploader] Add part etags to complete request
* [S3ClientMultiPartUploader] Use withers to create requests
* [S3ClientMultiPartUploader] don't bounce responses to tags when the client accepts them as-is
* [MD5HashGenerator] use MD5Hash
* [S3ClientMultiPartUploader] include hash in sending log message
* [S3ClientMultiPartUploader] tests throw correct exception
* [S3ClientMultiPartUploader] Include returned hash in error and log when send is finished
* [S3ClientUploader] Extract as trait, renaming implementations
* [S3Client] upload def now requires tryCount
* [S3ClientUploader] add accepts to trait
* [S3ClientMultiPartUploaderSuite] remove ambiguity over class import
* [S3ClientMultiPartTransferManager] implement and use
* Support multiple filters
* Clean up imports
* [S3ClientLogging] log the remote key value
* Update changelog, readme and long arg name
* [SyncSuite] update test
* [Sync] move thunks to s3client to bottom of class
Also, use the thunk methods from within run rather than accessing the
s3client object directly.
* Layout tweaks to put each parameter on own line
* [SyncSuite] value renames and move sync.run outside it() call
Future tests will be evaluating the result of that call, so this
avoids repeatedly calling it.
* Add first pass at copy methods and some delete stubs
* [Bucket] Convert from type alias for String to a case class
* [SyncSuite] mark new tests as pending
* [RemoteKey] Convert from type alias for String to a case class
* [MD5Hash] Convert from type alias for String to a case class
* [LastModified] Convert from type alias for String to a case class
* [LocalFile] Revert to using a normal File
* [Sync] Use a for-comprehension and restructure S3MetaData
The for-comprehension will make it easier to generate multiple actions
out of the stream of enriched metadata. The restructured S3MetaData
avoids the need to wrap it in an Either in some cases.
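The shape described above can be sketched as follows (names and types here are illustrative, not the real Sync/S3MetaData code): each local file is enriched with any matching remote metadata, and the for-comprehension then yields zero or more actions per file.

```scala
final case class LocalFile(path: String, hash: String)
final case class S3MetaData(file: LocalFile, remoteHash: Option[String])

object SyncSketch {
  // Pair each local file with the remote hash recorded for its key, if any.
  def enrich(file: LocalFile, remote: Map[String, String]): S3MetaData =
    S3MetaData(file, remote.get(file.path))

  // The for-comprehension flattens "zero or more actions per file" naturally.
  def actions(files: List[LocalFile], remote: Map[String, String]): List[String] =
    for {
      file   <- files
      meta    = enrich(file, remote)
      action <- meta.remoteHash match {
        case Some(h) if h == file.hash => Nil                          // in sync: no action
        case _                         => List(s"upload ${file.path}") // missing or changed
      }
    } yield action
}
```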
* [ToUpload] Add a wrapper to indicate action required on File
* [S3Action] Stub actions for IO events
* [S3Action] Use UploadS3Action
* [Sync] Fix formatting when echoing parameters
* [logging] Change log level down to 4 for listing every file considered
* [Sync] Use a case class to hold counters
* [HashModified] Add case class to replace MD5Hash, LastModified tuples
* [logging] Move file considered logging to source of files
Rather than logging this where adding meta data, move to where the
files are being initially identified.
* [logging] Log all final counters
* Pass Config and HashLookup as implicit parameters
* [LocalFileStream] rename method as findFiles
* [S3MetaDataEnricher] rename method as getMetadata
* Rename selection filter and uploader trait and methods
* [MD5HashGenerator] Extract as trait
* [Action] Convert ToUpload into an Action sealed trait
* [ActionGenerator] refactored and removed logging
* fix up tests
* [LocalFileStream] adjust logging
* [RemoteMetaData] Added
* [ActionGenerator] remove redundant braces
* [LocalFile] Added as wrapper for File
* [Sync] run: remove redundant braces
* [Sync] run: rename HashLookup as S3ObjectsData
* WIP - toward copy action
* Extract S3ObjectsByHash for grouping
* extract internal wrapper for S3CatsIOClient
Remove some boilerplate from the middle of a test
* Explicitly name the Map parameters in expected result
* All lastModified are the same to avoid confusion
We aren't testing this field, just that the keys and hash values are correct.
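The grouping itself can be sketched like this (simplified types; the real S3ObjectsByHash works over the S3 listing response): several remote keys can share one hash, which is what makes serving a copy from an existing remote object possible.

```scala
final case class KeyModified(key: String, lastModified: Long)

object S3ObjectsByHash {
  // Group (hash, key, lastModified) triples into hash -> set of keys.
  def byHash(objects: List[(String, String, Long)]): Map[String, Set[KeyModified]] =
    objects
      .groupBy { case (hash, _, _) => hash }
      .map { case (hash, group) =>
        hash -> group.map { case (_, key, lm) => KeyModified(key, lm) }.toSet
      }
}
```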
* Rename variable
* space out object creation
* Fix test - error in expected result
Code has been working for ages!
* [readme] condense and simplify behaviour table, adding option delete
Reduce the complexity by only noting the distinct attributes leading
to each action.
Add the action of delete when a local file is missing.
* [S3MetaDataEnricherSuite] rename tests and note missing tests
* [ActionGeneratorSuite] rename tests and note missing tests
* Note unwritten tests as such
* [ActionGenerator] #2 local exists, remote is missing, other matches
* [S3ClientSuite] fix tests
* [S3MetaDataEnricherSuite] #2a local exists, remote is missing, remote matches, other matches - copy
* [S3MetaDataEnricherSuite] drop 'remote is missing, remote matches'
Impossible to represent this combination
* [S3MetaDataEnricherSuite] #3 local exists, remote is missing, remote no match, other no matches - upload
* [S3MetaDataEnricherSuite] Tests #1-3 rename variables consistently
* [S3MetadataEnricherSuite] #4 local exists, remote exists, remote no match, other matches - copy
* [S3MetadataEnricherSuite] #5 local exists, remote exists, remote no match, other no matches - upload
* [S3MetadataEnricherSuite] drop test #6 - no way to make request
* [ActionGeneratorSuite] standardise tests 2-4
* [ActionGeneratorSuite] #1 local exists, remote exists, remote matches - do nothing
* [ActionGeneratorSuite] Comment expected outcome
* [ActionGeneratorSuite] #5 local exists, remote exists, remote no match, other no matches - upload
* [Action] Add ToDelete case class
* Use ToDelete and fix up return types for DeleteS3Action
* [ActionGenerator] Add explicit case for #1
* [ActionGenerator] Add explicit check for local exists in #2
* [ActionGenerator] match case against #3
* [ActionGenerator] simplify case and match against #5
* [ActionGenerator] Add case for #4
* [ActionGenerator] Remove explicit checks for file existing
If we are called with a LocalFile parameter then we assume the file exists.
* [ActionGenerator] Avoid #1 matching condition #5
* [ActionGeneratorSuite] enable tests
* [test] remove stray println
* [SyncSuite] Add test helper RecordingSync
* [SyncSuite] Use RecordingSync
* [SyncSuite] enable rename test - excluding delete test
* [Sync] log and increment counters for copy and delete
* [Sync] Use case matched RemoteKey in log message
* [Sync] Reorder actions to do copy then upload then delete
* [S3Action] Drop Move as a distinct action
Can be implemented as a Copy followed by a Delete.
* [S3Action] Actions are ordered Copy, Upload then Delete
This allows sequencing of actions so that all the quick to accomplish
copies take place before bandwidth/time costly updates or destructive
deletes. Deletes come last, after they have had the opportunity to be
used as the source for any copies.
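The ordering can be sketched as follows (field types are illustrative, not the real S3Action definitions): sorting any mix of actions puts cheap copies first, then uploads, with destructive deletes last so their objects remain available as copy sources.

```scala
sealed trait S3Action { def order: Int }
final case class CopyS3Action(key: String)   extends S3Action { val order = 1 }
final case class UploadS3Action(key: String) extends S3Action { val order = 2 }
final case class DeleteS3Action(key: String) extends S3Action { val order = 3 }

object S3Action {
  // Default ordering: Copy < Upload < Delete.
  implicit val ordering: Ordering[S3Action] = Ordering.by(_.order)
}
```

Any `List[S3Action].sorted` then sequences the work safely without callers needing to know the ranking.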
* [Sync] Use S3Action's default sorting
* [Sync] extract logging of activity
* [SyncLogging] Extract logging out of Sync
Single Responsibility principle - Sync knows nothing about how it
logs, it just delegates to SyncLogging.
* [Sync] Rename variables and extract sort into private def
* [SyncLogging] Use IO context
* [SyncLogging] Remove moved counter
* [SyncLogging] Clean up and log start-of-run config info
* Verify that IO actions are evaluated before the program terminates
* [Sync] ensure logging runs
* [ActionGenerator] Don't upload files every time
* [ActionGenerator] fix remote hash for #5
* [SyncSuite] Add tests for delete and delete after rename
* [RemoteKey] Add asFile and isMissingLocally helpers
* [Sync] Generate delete actions
* Remove old extensions upon MD5HashGenerator
* [MD5Hash] prevent confusion by never allowing quotes
This means we need to filter quotes from md5hash values at source
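The idea can be sketched as stripping on construction (an assumed shape for MD5Hash): S3 ETags arrive wrapped in double quotes, so removing them once at the boundary means they can never leak into comparisons.

```scala
final case class MD5Hash(in: String) {
  // The raw input may be a quoted ETag; the usable hash never contains quotes.
  lazy val hash: String = in.filter(_ != '"')
}
```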
* [Sync] ensure start log message is run
* [ThorpS3Client] Fix passing parameters for source key
* [ThorpS3Client] reformat byKey for clarity
* [S3Client] Add level 5 logging around s3 sdk calls
* fix up tests
* [config,parseargs] Accept v/verbose command line argument
* [parseargs] lowercase program name
* [logging] Log messages based on command line argument
* [readme] update usage
If the remote file is missing then return None.
S3MetaDataEnricher.enrichWithS3MetaData now returns an IO[Either[File,
S3MetaData]]. If objectHead returns None, then this returns the file;
otherwise, the Some[Hash, LastModified] from objectHead is used to
create the S3MetaData as before.
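The described shape can be sketched without the IO wrapper and with simplified types (the real enrichWithS3MetaData works over File and the S3 client; `Enricher` and the function parameter are illustrative):

```scala
final case class Hash(value: String)
final case class LastModified(epoch: Long)
final case class S3MetaData(file: String, hash: Hash, lastModified: LastModified)

object Enricher {
  // Left(file) when the remote head is missing; Right(metadata) otherwise.
  def enrichWithS3MetaData(
      file: String,
      objectHead: String => Option[(Hash, LastModified)]
  ): Either[String, S3MetaData] =
    objectHead(file) match {
      case None                       => Left(file)
      case Some((hash, lastModified)) => Right(S3MetaData(file, hash, lastModified))
    }
}
```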