{"id":224676818,"date":"2023-11-18T01:54:15","date_gmt":"2023-11-18T06:54:15","guid":{"rendered":"https:\/\/phonescanada.com\/?p=224676818"},"modified":"2023-11-20T00:21:45","modified_gmt":"2023-11-20T05:21:45","slug":"utilisation-de-meta-sur-emu-video-et-emu-edit-exploiter-lia-generative-pour-les-gif-les-photos-et-les-videos-de-4-secondes","status":"publish","type":"post","link":"https:\/\/phonescanada.com\/fr\/utilisation-de-meta-sur-emu-video-et-emu-edit-exploiter-lia-generative-pour-les-gif-les-photos-et-les-videos-de-4-secondes\/","title":{"rendered":"Meta&#039;s Emu Video and Emu Edit: harnessing generative AI for GIFs, photos, and 4-second videos"},"content":{"rendered":"<div>\n<p>Meta announced in a blog post that it is working on new research into \u201ccontrolled image editing based solely on text instructions and a method for text-to-video generation based on diffusion models\u201d. In simpler terms, Meta wants to bring generative AI tools to Facebook and Instagram. The projects Meta is developing are called Emu Video and Emu Edit.<\/p>\n<h2>What is Emu Video?<\/h2>\n<p>This tool, as the name suggests, is for generating video. Meta describes it as \u201ca simple method for text-to-video generation based on diffusion models\u201d. Emu Video should respond to a variety of inputs: text only, image only, and both text and image. 
The process is split into two steps, Meta clarifies: first, generating an image conditioned on a text prompt, and then generating video conditioned on both the text and the generated image.<\/p>\n<div class=\"quote quote-auto nolinks\" style=\"width: 85.0%;\">\n<p>Our state-of-the-art approach is simple to implement and uses just two diffusion models to generate 512\u00d7512 four-second-long videos at 16 frames per second.<\/p>\n<\/div>\n<h2>What is Emu Edit?<\/h2>\n<p>This tool should allow \u201cprecise image editing\u201d via recognition and generation tasks. As Meta says, the use of generative AI is often a process, not a single task.<\/p>\n<p>\u201cEmu Edit is capable of free-form editing through instructions, encompassing tasks such as local and global editing, removing and adding a background, color and geometry transformations, detection and segmentation, and more. Current methods often lean towards either over-modifying or under-performing on various editing tasks. We argue that the primary objective shouldn\u2019t just be about producing a \u2018believable\u2019 image. Instead, the model should focus on precisely altering only the pixels relevant to the edit request. Unlike many generative AI models today, Emu Edit precisely follows instructions, ensuring that pixels in the input image unrelated to the instructions remain untouched. For instance, when adding the text \u2018Aloha!\u2019 to a baseball cap, the cap itself should remain unchanged\u201d, says the Meta team.<\/p>\n<h2>The potential use cases<\/h2>\n<p>The road ahead is definitely AI-driven for Meta.<\/p>\n<p>\u201cAlthough this work is purely fundamental research right now, the potential use cases are clearly evident. Imagine generating your own animated stickers or clever GIFs on the fly to send in the group chat rather than having to search for the perfect media for your reply. Or editing your own photos and images, no technical skills required. 
Or adding some extra oomph to your Instagram posts by animating static photos. Or generating something entirely new\u201d, the blog post concludes.<\/p>\n<\/div>","protected":false},"excerpt":{"rendered":"<p>Meta announced in a blog post that it is working on new research into \u201ccontrolled image editing based solely on text instructions and a method for text-to-video generation based on diffusion models\u201d. In simpler terms, Meta wants to bring generative AI tools to Facebook and Instagram. The projects Meta is developing are called [&hellip;]<\/p>\n","protected":false},"author":9,"featured_media":224676819,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"wds_primary_category":255,"footnotes":""},"categories":[255],"tags":[],"class_list":["post-224676818","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-apple-tips-news"],"_links":{"self":[{"href":"https:\/\/phonescanada.com\/fr\/wp-json\/wp\/v2\/posts\/224676818","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/phonescanada.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/phonescanada.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/phonescanada.com\/fr\/wp-json\/wp\/v2\/users\/9"}],"replies":[{"embeddable":true,"href":"https:\/\/phonescanada.com\/fr\/wp-json\/wp\/v2\/comments?post=224676818"}],"version-history":[{"count":0,"href":"https:\/\/phonescanada.com\/fr\/wp-json\/wp\/v2\/posts\/224676818\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/phonescanada.com\/fr\/wp-json\/wp\/v2\/media\/224676819"}],"wp:attachment":[{"href":"https:\/\/phonescanada.com\/fr\/wp-json\/wp\/v2\/media?parent=224676818"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/phonescanada.com\/fr\/wp-json\/wp\/v2\/categories?post=224676818"},{"taxonomy":"post_tag","embeddable":true,"href":"https
:\/\/phonescanada.com\/fr\/wp-json\/wp\/v2\/tags?post=224676818"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}