So, you want to Typeset Anime, but have no prior experience?
If you have come this far, I hope you know what Typesetting refers to. Many people come to me believing that Typesetting is simply the art of selecting nice dialogue fonts and fabulous colors to match the characters or the feel of the anime. That is not really the case: Typesetting is the art of changing or modifying signs so that the translation looks as natural as possible. In fact, you know you have succeeded when people think the translation is a natural part of the video. They say an image says more than a thousand words, so I hope a video sequence of a translated sign will give you an idea of what you will be doing:
This guide is meant to help you start typesetting in AFX. Before reading it, it will help to have some basic knowledge of the program.
A tutorial I used myself when starting out with AFX, the “Basic Training” tutorial on Video Copilot, covers most of the basics of the program. In this guide I won’t re-explain things already covered there, so make sure to watch it before you start:
Done? OK, then let’s proceed.
Encoding & AFX
Here I will go through the basics of encoding clips for use in AFX, and of rendering files from AFX.
- Encode - the act of converting video clips to a different format or codec (DivX, XviD, H.264, etc.)
- Render - a sort of encoding, but in AFX. Rendering makes a hard copy of your current workspace (or composition) in AFX.
- Overlay - putting your AFX-rendered video on top of the original raw (often the actual workraw). Overlay can also mean the rendered sequence itself; you say that you render an overlay from AFX.
- RGB32 & Alpha - computer images are normally built up of three color channels: Red, Green and Blue (RGB for short). RGB32 also contains a so-called Alpha channel, which tells which parts of the image are visible and how much.
I myself consider cutting video files to be one of the most boring parts of Typesetting, but it’s necessary for two reasons.
- It is a great way to check your finished work against the original video, to catch anything you might have gotten wrong. You often miss some small detail that becomes obvious when you overlay the render.
- Encoders are lazy; often you will have to encode these clips yourself, simply because the encoder won’t.
AviSynth - http://sourceforge.net/project/showfiles.php?group_id=57023&package_id=72557
Download latest version
VirtualDub - http://virtualdub.sourceforge.net/
Look for the stable 32-bit (x86) version. It is important to avoid the x64 version of Vdub, since it does not support as many codecs as the 32-bit version.
Lagarith - http://lags.leetcode.net/codec.html
Look for the Lagarith Installer
AviSynth Plug-ins (AviSynth Required) - TE_avisynth_Plugins_(v1.1).7z
The latest version of the plug-ins I have for AviSynth. Extract it and replace the plug-in folder in your AviSynth folder with this one.
Python - http://www.python.org/download/
You need version 3.0.0 or later
Python Scripts - TE_Python_Scripts_v2.1.7z NEW
Extract these scripts into your VirtualDub folder.
Missing DLL’s (Vdub Required) - Missing_DLL’s.7z
Download this only if you get an error about missing DLL’s when you load a video into Vdub (this also works for MeGUI). Put the DLL’s in the root folder of the application (where “vdub.exe” and/or “MeGUI.exe” is located).
mkunicode.dll - Is a part of the Haali Media splitter
d3dx9_30.dll - Is a part of the April 2006 update of DirectX (Direct3D x9)
Working with AviSynth (Part 1)
Once you have all the programs installed, we can proceed to cut the video. Lately, with some help from my friend kile, I have made the process of creating the AviSynth file completely automatic. You only need the required files above for it to work.
Find the raw you want to cut your clips from and drag and drop the file onto the Python script called “Auto-AVS.py” in your VirtualDub folder. Alternatively, you can make a shortcut to the Python file and drag and drop onto that, or, even better, put the shortcut in the “Send To” folder in Windows.
If you already have clips encoded by the encoder, but not in a codec suitable for AFX (codecs like H.264/DivX/XviD and file extensions like .mkv or .mp4 won’t work well with AFX), you can use the Auto-AVI.py script. It will automatically encode your files to Lagarith with the correct settings; you can choose several files to process at the same time.
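For reference, the automation basically just writes a short .avs file for you. A minimal hand-written equivalent might look like the sketch below. This is only an illustration: the source filter (FFVideoSource, from the FFMS2 plug-in) and the file name “raw.mkv” are my assumptions, and the exact script the automation writes may differ.

```avisynth
# "raw.mkv" is a placeholder - use your actual raw file.
# FFVideoSource assumes the FFMS2 plug-in is installed.
FFVideoSource("raw.mkv")
# Vdub and the Lagarith RGBA settings work in RGB32.
ConvertToRGB32()
```

Save it as something like raw.avs and open it in Vdub, and you are at roughly the same point the automation takes you to.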
But first of all we need to know what aspect ratio your raw is in. To do this, open the file in MPC-HC (Media Player Classic – Homecinema). If MPC-HC is not your preferred player: right-click your video file and choose Open with > Media Player Classic – Homecinema.
If Media Player Classic – Homecinema is not listed:
click “Choose default program…”, then click the “Browse…” button and navigate to where your MPC-HC.exe file is located.
If you want, you can uncheck the box that says “Always use the selected program to open this kind of file.” (You may have to do this for each file extension.)
Once you have the file open in MPC-HC (you may pause the video at any time), right-click on the image and choose Properties (Shift+F10), then go to the Details tab.
Here all the relevant details are listed. We need to know the display resolution, that is, the resolution at which the video will be displayed during playback. It is listed right beside the codec information; in my example we can see that it is set to display at 853:480, although the actual size is 848×480. The video player stretches the actual 848×480 pixels to the set value of 853:480.
This is a bit complicated to explain and, quite frankly, you don’t need to know more unless you really want to explore it. Drop me a PM on IRC if you wish to know more and I will explain it to you in person. For now we just need to know that the end result will be displayed at 853:480 instead of 848×480. This differs from file to file, and sometimes (I would even say mostly) you don’t need to change the resolution of your clips at all. If you are uncertain what to do here, don’t be afraid to drop me a PM on IRC; I’ll happily help you personally.
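If you do decide to work at the display resolution, the resize is a single extra line in the .avs. A sketch for my example file, assuming the built-in Lanczos resizer (the source line is a placeholder as before):

```avisynth
FFVideoSource("raw.mkv")   # placeholder file name
ConvertToRGB32()
# Stretch the stored 848x480 frame to its 853x480 display size.
LanczosResize(853, 480)
```

If you resize here, remember that the AFX render will have to be stretched back to 848×480 later, during rendering.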
Next, we need to make sure we have all the files in the correct place. Download Python and the Python project files, and follow the procedure explained earlier. The script will automatically open VirtualDub after creating the AviSynth script, so you can continue with cutting out the clips.
So, I realize I’ll get yelled at for advising people to use MPC-HC to check the codec, aspect ratio and frame rate. Really, MPC-HC is perhaps only for those who are completely new and have no experience of fansubbing or encoding. If you feel like going “more pro”, you can always use MediaInfo instead. MediaInfo is included in the K-Lite Codec Pack, but I can’t say for sure about other codec packs. If you have K-Lite, you just have to right-click the video file and go to: Send to > MediaInfo. Otherwise you can download it here. Here is a picture of how it looks with the K-Lite Codec Pack.
Cutting and Encoding with VirtualDub
Cutting clips in VirtualDub is done entirely in the UI; there is no need to type anything out (as with trim() in AviSynth). Also, when you jump to a time you don’t need to specify the hour, just minute and second: mm:ss. A neat feature in Vdub is the ability to scan for scene changes, which makes cutting much faster since you don’t have to do it manually.
When you cut clips, you want to find the scene changes before and after the sign is displayed, that is, the moments where the majority of the image changes. In my example it is pretty easy, since my sign is a single scene: I cut from the first frame of this scene to the first frame of the next scene. However, when the sign fades in and/or out, you will have to go further until you find a clear scene change.
In my example I will mostly avoid keyboard shortcuts, but two you will want to remember are: Ctrl+G to go to a specified frame or time, and F7 to save as AVI. In my example the first sign is located at 01:21 (mm:ss), so I type that into the dialogue that pops up when I press Ctrl+G.
Don’t forget to subtract one frame from the end when you name the saved AVI. In my example I cut from frame 1919 to 1978, but the last frame will never show up in AFX, so you need to subtract one from the last frame in the file name. I will therefore name my clip 1919,1977.avi and click Save.
If you cut with AviSynth instead, you just need to copy what is inside the trim() call. In this case I would have the following in my .avs script:
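A sketch of that script, using the frame numbers from my example (the source line is a placeholder, loading the raw as before):

```avisynth
FFVideoSource("raw.mkv")   # placeholder file name
ConvertToRGB32()
# First and last frame of the sign - the same numbers
# as in the clip name 1919,1977.avi
trim(1919, 1977)
```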
Now all that is left is to encode the clip. Go to: File > Save As AVI… (F7). Choose a location for your .avi file and name it after the frame range it covers.
Vdub will now encode your clip; depending on how fast your computer is, this may take a while.
Now that we have our clip encoded and ready, we can finally go to AFX to apply the translation and render the overlay.
Now that we have gotten this far, it’s time to make our sign in AFX. Once that is done, we have to get the sign back out of AFX, and we do this by rendering.
At this point you can go through one of the tutorials available so far, but I recommend you either just watch the sign used as the example for this tutorial, which can be found Here, or simply continue, to learn the rest of the basics before jumping to the sign tutorials.
Here I will show you how I personally set up AFX when I render my signs. Some of what I show won’t be strictly required; you can adjust it as you like later, once you have more experience. I will put what is optional in boxes, so it is clear what is mandatory and what is not.
To render your finished AFX project, begin by activating the composition you want to make a hard copy of.
When I render my finished composition, I first rename the composition after the clip’s frame range, that is, what is inside the trim() call in the AviSynth script; for the example that is “1919,1977”. When you drag and drop your file onto the “Create new composition” button, the composition is named after the file, so if you named the file as I suggested earlier, it will automatically get the right name.
When you have selected the desired comp, go to: Composition > Add to Render Queue (Ctrl+Shift+/). Your composition will now be added to the render queue.
In the render queue window, if the yellow text beside “Output Module” does not say “Lossless with Alpha”, click the little arrow next to it and choose “Lossless with Alpha”. Then click the yellow “Lossless with Alpha” text itself; this opens the Output Module Settings dialogue.
The first thing we have to do is change the codec: instead of the default “No Compression” we will choose Lagarith. Click the “Format Options…” button, under “Video Codec:” choose “Lagarith lossless codec”, then click “Codec Settings” and make sure it is set to RGBA.
Back in the Output Module Settings dialogue, we need to set Channels to “RGB + Alpha” and Color to “Straight (Unmatted)”.
At this point you can save these settings as a template, to avoid having to change format, channels and color every time. Click OK to go back to the render queue window, then click the arrow between the yellow text and “Output Module” again. Choose “Make Template…” and name it something like “Lagarith”. I recommend also making it the default choice for rendering: after saving, open the list next to “Movie Default” and choose Lagarith.
To avoid overwriting the old .avi file, you may want to save to another folder or rename your file.
AFX can auto-generate the file name for you, and you can customize what is included in the final name. To do so, click the arrow by “Output To” in the render queue window and choose “Custom…”; this opens the File Name Templates dialogue. Here you can add whatever properties you want included. This is what I have set mine to:
[compName]_-_[projectName]_[[outputModuleName]].[fileExtension]
You can use this or set your own; save it and set it as the default preset.
If you don’t want AFX to auto-generate the file name, you can click the yellow text to change the folder and/or file name.
If you didn’t resize your script in AviSynth before making your sign, you are now ready to hit the “Render” button. But my example file has to be scaled back to the original pixel dimensions (848×480). To do this, I click the yellow text by “Output Module” in the render queue, check the “Stretch” box, uncheck “Lock Aspect Ratio”, and set “Stretch to:” to my original resolution (the one before the aspect ratio in MPC-HC):
Video: MPEG4 Video (H264) 848×480 (853:480) 23.98fps [English (Video 1)]
One last thing to think about: when rendering your video from AFX, make sure you turn off the visibility of your video clip. This is very important, because the sign will not be a real overlay unless everything you did not edit is invisible. That is, in my example I only want the translation to be visible, nothing more. (Null objects are not shown when rendering, so those are safe to keep visible even when you render.)
Another way to make the clip invisible is to right-click it and choose: Guide Layer. This renders the chosen clip invisible only during render or when you nest compositions; it remains visible inside the composition where it is set as a Guide Layer.
There are several reasons to do this:
- The clip you encoded is from a workraw; if the encoder later changes some setting and you rendered your video with the clip in the background, that part of the video may end up looking different.
- The file size will be much bigger if you render everything and leave nothing invisible.
- It is much easier to check whether your sign is correctly timed or cut.
There are some things you should keep in mind when you don’t have any background. Layer modes, for example, won’t work; you will have to either cut a small background from the clip or compensate for this in some other way.
Working with AviSynth (Part 2)
Now we have a rendered overlay ready to head off to the encoder, but first we want to make sure it looks like it should. We do this by checking it in AviSynth. If you followed the previous part and set up the file name template, you should have something like this:
I named my project Train.
Open up your previous .avs script and remove the trim() line. Replace it with:
avisign("RenderedClip.avi").sign(FirstFrame,LastFrame)
This tells AviSynth to load your overlay and put it between “FirstFrame” and “LastFrame” (1919 and 1977 in the example). You can reuse everything but what is inside the parentheses of trim(), because the frames should be the same as for sign(). In my case I have to put this line before the resize, since I changed the resolution back to the original (848×480) in AFX.
You can also choose to specify only “FirstFrame”; AviSynth will then calculate the last frame from the number of frames in your rendered clip. This is good for testing that your overlay has the same number of frames as the sign: go to the last frame and see whether it overshoots or falls short by one or more frames. (Credit for the script goes to Pichu.)
If for some reason you don’t want to replace trim(), you can put a # in front of it. A # in AviSynth nullifies everything after it on the same line. You will notice that all text after the # turns green in AvsP, so you can easily see which parts will not be processed; you can put anything after a #, like a comment for instance. You will of course have to put avisign() on a new line then.
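Putting it all together, the finished check script for my example might look like the sketch below. The avisign()/sign() functions come from Pichu’s script; the source line is a placeholder as before, and the overlay file name is my guess at what the AFX template from the previous part would produce.

```avisynth
FFVideoSource("raw.mkv")   # placeholder file name
ConvertToRGB32()
#trim(1919, 1977)          # nullified with #, kept for reference
# Load the rendered overlay and place it on frames 1919-1977
avisign("1919,1977_-_Train_[Lagarith].avi").sign(1919, 1977)
```

Step through frames 1918–1978 in AvsP and you will immediately see whether the sign is timed and cut correctly.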
To be continued?
All Done !?
Hey, congrats on making it this far; you are now ready to jump into one of the (soon to be) many tutorials. I recommend taking it easy at first before moving on to the more advanced ones… blah blah, who cares, it’s your life, right?
Anyway, the easy tutorials you can find listed Here.