Use the aws-sdk Ruby gem to update the content type after uploading files to Amazon S3

I am running a script that updates the metadata fields on some S3 objects after they have been uploaded to the S3 bucket. During the initial upload, I set the content type by checking the file name:

def save_to_amazon(file, s3_object, file_name, meta_path)
  puts "uploaded #{file} to Amazon S3"
  content_type = set_content_type(file_name)
  s3_object.write(file.get_input_stream.read, :metadata => {:folders => meta_path}, :content_type => content_type)
end
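`set_content_type` above is the poster's own helper and its body is not shown; a minimal sketch of what it might look like, mapping the file extension to a MIME type (the extension list here is an assumption, not from the original):

```ruby
# Hypothetical sketch of the set_content_type helper used above:
# look up the file extension, falling back to a generic binary type.
MIME_TYPES = {
  ".txt"  => "text/plain",
  ".html" => "text/html",
  ".jpg"  => "image/jpeg",
  ".png"  => "image/png",
  ".pdf"  => "application/pdf",
}.freeze

def set_content_type(file_name)
  ext = File.extname(file_name).downcase  # ".PDF" -> ".pdf"
  MIME_TYPES.fetch(ext, "application/octet-stream")
end
```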

At this point, the correct S3 content type is applied to these objects. The problem comes when I update the metadata later. I run something like this:

s3_object.metadata['folders'] = "some string"

After updating the metadata, running s3_object.content_type returns an empty string.

There is no s3_object.content_type= setter available.

As far as I can tell from reading the RDoc, the content type cannot be assigned after the S3 file has been uploaded. I have tried using the metadata method:

s3.object.metadata['content_type'] = "some string"
s3.object.metadata['content-type'] = "some string"

Both of these seem to assign a new custom metadata attribute instead of updating the MIME type of the object.

Is there a way to set this, or do I need to re-upload the file entirely?

To expand on tkotisis's answer, this is how I updated the content type using copy_to. You can use s3object.head[:metadata] to extract the existing metadata and copy it across:

amazon_bucket.objects.each do |ob|
  metadata = ob.head[:metadata]
  content_type = "foo/bar"
  ob.copy_to(ob.key, :metadata => metadata, :content_type => content_type)
end

Edit

amazon_bucket.objects.each do |ob|
  metadata = ob.metadata
  content_type = "foo/bar"
  ob.copy_to(ob.key, :metadata => {:foo => metadata[:foo]}, :content_type => content_type)
end
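For reference, in the newer aws-sdk-s3 API (v2+) the same fix is a copy_object call that copies the object onto itself with metadata_directive set to "REPLACE" — S3 ignores a new content type on a COPY request unless that directive is sent, which is the crux of the problem above. A sketch, assuming hypothetical bucket and key names; the helper only assembles the request parameters:

```ruby
# Build the parameters for an in-place copy that rewrites the content type.
# Existing metadata must be re-sent, because REPLACE discards whatever the
# object currently carries.
def replace_content_type_params(bucket, key, metadata, content_type)
  {
    :bucket => bucket,
    :copy_source => "#{bucket}/#{key}",  # copy the object onto itself
    :key => key,
    :metadata => metadata,               # carry the existing metadata over...
    :content_type => content_type,       # ...plus the corrected MIME type
    :metadata_directive => "REPLACE",    # without this, S3 keeps the old headers
  }
end

# Usage (client and names are assumptions):
# s3 = Aws::S3::Client.new
# s3.copy_object(replace_content_type_params("my-bucket", "report.pdf",
#                                            { "folders" => "a/b" },
#                                            "application/pdf"))
```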

